With the increasing digitalization of society, more and more people use photography to record the details of their daily lives. However, photographs produced by modern imaging technology cannot by themselves satisfy the demand for artistic renderings such as oil paintings. Image-to-image style transfer, which addresses this gap, has been widely applied in practice and has attracted considerable attention in computer vision. In this paper, we use the cycle-consistent generative adversarial network (CycleGAN) to perform style transfer on images. We experiment with the CycleGAN architecture to convert natural photographs into images of a given artistic style. Moreover, this method does not require paired source and style images, which broadens its range of application. Although generative adversarial networks (GANs) have powerful modeling capabilities, they are difficult to train. In our experiments, we therefore compare the quality of generated samples under the WGAN, WGAN-GP, LSGAN, and original GAN objective functions. The results show that WGAN-GP stabilizes the training process and generates the most realistic images, followed by WGAN and LSGAN, while the original GAN objective frequently suffers from mode collapse.
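As a rough illustration of the objective functions compared in the experiments, the following NumPy sketch writes out the discriminator-side losses of the original GAN, LSGAN, and WGAN, the WGAN-GP gradient penalty term, and the L1 cycle-consistency loss that CycleGAN adds on top of the adversarial loss. Function names and the scalar-score setting are illustrative simplifications, not the authors' implementation:

```python
import numpy as np

def gan_d_loss(d_real, d_fake):
    # Original GAN: -E[log D(x)] - E[log(1 - D(G(z)))],
    # where d_real/d_fake are discriminator probabilities in (0, 1).
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def lsgan_d_loss(d_real, d_fake):
    # LSGAN: least-squares loss with target 1 for real, 0 for fake scores.
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def wgan_d_loss(d_real, d_fake):
    # WGAN: the critic maximizes E[D(x)] - E[D(G(z))] over unbounded scores;
    # the minimized loss is the negation of that difference.
    return -(np.mean(d_real) - np.mean(d_fake))

def gradient_penalty(grad_norms, lam=10.0):
    # WGAN-GP: penalize the critic's gradient norm (evaluated on
    # real/fake interpolates) for deviating from 1; added to wgan_d_loss.
    return lam * np.mean((grad_norms - 1.0) ** 2)

def cycle_consistency_loss(x, x_reconstructed, lam=10.0):
    # CycleGAN: L1 distance between an input image x and its
    # round-trip reconstruction F(G(x)), weighted by lambda.
    return lam * np.mean(np.abs(x - x_reconstructed))
```

In a training loop, each adversarial loss above would be computed on the discriminator's outputs for a batch; WGAN-GP additionally requires differentiating the critic with respect to interpolated inputs to obtain the gradient norms, which is what stabilizes training relative to weight clipping.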