Image compression reduces storage costs and makes the transmission of large image files feasible. This paper addresses lossy image compression by using deep learning to estimate the most important Discrete Cosine Transform (DCT) coefficients. The DCT transforms an image from the spatial domain to the frequency domain, where the first few coefficients of a transformed block carry most of the information while the remaining ones contribute little. We exploit the capabilities of a Multi-Layer Perceptron (MLP) and a Convolutional Neural Network (CNN) to obtain a reasonable estimate of the important DCT coefficients. The goal was a deep neural network (DNN) for digital image compression that operates on a reduced number of DCT coefficients, i.e., achieves a higher compression rate, better image quality upon reconstruction, and improved generalization. To shorten encoding-decoding time and speed up the training of our deep neural networks, ReLU and tangent sigmoid activations were used. Experiments on a large set of grayscale images show that only 15 of the 64 available DCT coefficients yield more than 70% image quality at a good compression ratio of 4:1. Moreover, subjective and objective evaluation on unseen data shows that the proposed MLP achieves better generalization than the CNN.
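
As a rough illustration of the fixed-truncation scheme the abstract describes, the NumPy sketch below keeps only the first `keep` of 64 DCT coefficients of an 8×8 block (in zigzag, low-to-high-frequency order) and reconstructs the block with the inverse DCT. All function names are illustrative; this is a classical baseline under stated assumptions, not the paper's learned coefficient estimator.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)          # scale DC row for orthonormality
    return c * np.sqrt(2 / n)

def zigzag_indices(n=8):
    """(row, col) indices of an n x n block ordered from low to high frequency."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))

def compress_block(block, keep=15):
    """Zero all but the first `keep` zigzag DCT coefficients, then invert."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T            # forward 2-D DCT
    mask = np.zeros_like(coeffs)
    for r, c in zigzag_indices(block.shape[0])[:keep]:
        mask[r, c] = 1.0                # retain low-frequency coefficients
    return C.T @ (coeffs * mask) @ C    # inverse 2-D DCT
```

With `keep=15`, 49 of the 64 coefficients per block are discarded, consistent with the roughly 4:1 compression ratio reported above; smooth blocks reconstruct with low error because their energy concentrates in the retained low frequencies.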