Deep Convolutional Neural Networks (DCNNs) have emerged as powerful components of computer-aided systems for detecting and classifying diseases in medical images. To robustly achieve high performance, a large and diverse dataset of annotated images is required to train or fine-tune DCNNs. However, such datasets are often difficult and/or expensive to obtain due to demands on clinicians’ time and inter-reader variability. To address these challenges and increase training dataset size, we propose an efficient multi-stage algorithm to generate synthetic medical image data by extracting annotated diseased regions and randomly projecting them onto disease-free images. To test the feasibility of this new algorithm, we use the publicly available Indian Diabetic Retinopathy Image Dataset (IDRiD), which comprises annotated fundus images acquired from 81 patients with two categories of diseases. Of these, 54 images are used for training and 27 for testing. Using the proposed algorithm, we generate synthetic data by inserting extracted diseased lesions onto a separate set of 60 disease-free images, yielding 7,902 and 6,786 images for the two disease categories, respectively. Three transfer-learning-based DCNN models (VGG16, ResNet50, and Inception-v3) are trained on the original IDRiD images and on the synthetic dataset, respectively. When applied to the same test images, the model trained on the synthetic dataset outperforms the model trained on the original IDRiD dataset by 7.4% in disease classification. These results indicate that this simple algorithm can generate diverse and useful synthetic lesion data to improve the performance of DCNN models.
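The core paste operation described above, i.e., cutting an annotated lesion out of a diseased image and placing it at a random position on a disease-free image, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function and variable names are assumptions, and a real fundus-image pipeline would additionally need boundary blending and checks that the paste location falls inside the retinal field of view.

```python
import numpy as np

rng = np.random.default_rng(0)


def paste_lesion(healthy, diseased, mask):
    """Cut the mask-annotated lesion out of `diseased` and paste it at a
    random offset inside `healthy`; returns a new synthetic image.
    (Illustrative sketch; assumes single-channel images of equal dtype.)"""
    ys, xs = np.nonzero(mask)
    # Tight bounding box around the annotated lesion.
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    patch = diseased[y0:y1, x0:x1]
    patch_mask = mask[y0:y1, x0:x1].astype(bool)

    h, w = patch.shape[:2]
    H, W = healthy.shape[:2]
    # Random top-left corner that keeps the patch fully inside the image.
    ty = rng.integers(0, H - h + 1)
    tx = rng.integers(0, W - w + 1)

    out = healthy.copy()
    region = out[ty:ty + h, tx:tx + w]
    # Copy only the lesion pixels, leaving the surrounding box untouched.
    region[patch_mask] = patch[patch_mask]
    return out
```

Repeating this for every extracted lesion across many disease-free backgrounds and offsets is what multiplies a few dozen annotated images into thousands of synthetic training samples.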