Generative adversarial networks (GANs) are among the most important developments in the field of deep learning. GAN models represent the state of the art in tasks such as image editing, image/cartoon painting, super-resolution image generation, and transferring the texture/pattern of one image onto another. In this study, the performance of GAN models commonly used in the literature (cGAN, DCGAN, InfoGAN, SGAN, ACGAN, WGAN-GP, LSGAN) in producing synthetic images closely resembling real images was investigated. The originality of the study lies in the development of a hybrid GAN model (cDCGAN) that combines the advantages of cGAN and DCGAN, and in the comparative evaluation of the GAN methods using deep learning based convolutional neural networks (CNNs). Synthetic images similar to those in the data sets were generated with the implemented models. The Fréchet inception distance (FID) metric and a CNN classifier were used to measure the similarity of the generated synthetic images to the existing images, and thereby to evaluate model performance. In the experimental studies, the time-based image generation performance of all models was also evaluated. The results show that the images produced by the LSGAN model yield a high classification performance rate, while DCGAN and WGAN-GP produce clearer images with less noise.
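The FID metric mentioned above compares the Gaussian statistics of feature activations of real and generated images. A minimal sketch of the underlying Fréchet distance computation follows; it uses NumPy and SciPy on plain feature arrays (in a real FID pipeline, the features would come from an Inception-v3 pooling layer, which is omitted here for brevity):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(act1, act2):
    """Fréchet distance between two sets of feature activations.

    For FID proper, act1/act2 would be Inception-v3 pool features of
    real and generated images; here they are any (n, d) arrays."""
    mu1, mu2 = act1.mean(axis=0), act2.mean(axis=0)
    sigma1 = np.cov(act1, rowvar=False)
    sigma2 = np.cov(act2, rowvar=False)
    diff = mu1 - mu2
    # Matrix square root of the covariance product; numerical noise can
    # introduce tiny imaginary components, which are discarded.
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 8))   # stand-in "real" features
fake = rng.normal(0.5, 1.2, size=(500, 8))   # stand-in "generated" features
d_same = frechet_distance(real, real[::-1].copy())
d_diff = frechet_distance(real, fake)
print(d_same, d_diff)
```

Identical feature sets give a distance near zero, while a shifted distribution gives a clearly positive value; lower FID therefore indicates that the generated images are statistically closer to the real ones.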