Retinopathy of Prematurity (ROP) is a disease affecting infants born preterm. At birth their retinas are not fully developed, and in many cases the retinal blood vessels fail to develop to full term afterwards. Sometimes these vessels stop growing and then suddenly begin growing in abnormal directions; this abnormality causes retinal traction, leading to blindness. Each country has its own screening guidelines for diagnosing the disease. The disease can be categorized as severe or mild and has five stages. Stages one and two are not severe and can develop and heal unnoticed. Stage three should be diagnosed because it is reversible through treatment, but when the disease progresses to stage four, retinal traction occurs, causing blindness at stage five. The emergence of digital imaging support has led hospitals to capture retinal images to determine the presence or absence of severe ROP. These images can be used to detect retinal detachment or a lack of vessel growth. ROP diagnosis is expensive, with few eye specialists available in hospitals, and the process of capturing retinal images by non-specialists and transmitting them to specialists for diagnosis poses many issues: different cameras produce images of different contrast, and transmission may degrade image quality depending on the channel used. These challenges call for the development of systems that support both image quality assessment and assistive disease diagnosis. This paper proposes a deep learning model to assist ophthalmologists in determining the presence or absence of the disease, as well as diagnosing it at stage three. A customized ResNet-50 model was first applied to preprocess the images and separate good-quality images from poor-quality ones.
Ninety-one (91) images were obtained from the Kaggle database and 11,100 images from the HVDROPDB database; these were used for model training, testing, and validation at a ratio of 0.80 training, 0.20 testing, and 0.20 validation. After preprocessing, the desired features were extracted and fed into a deep neural network to classify each image as either having the disease or not. Those with ROP were further divided into two sub-classes: ROP stage two or stage three. The model achieved an accuracy of 92.8%, a sensitivity of 94.9%, and a precision of 97.3%.
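The reported metrics follow the standard definitions over a confusion matrix. The sketch below computes them from hypothetical counts (the paper does not publish its confusion matrix, so the numbers are illustrative only):

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int):
    # Standard definitions: accuracy over all predictions,
    # sensitivity (recall) over actual positives,
    # precision over predicted positives.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return accuracy, sensitivity, precision

# Illustrative counts only, not taken from the paper.
acc, sens, prec = classification_metrics(tp=70, tn=23, fp=2, fn=5)
```

Reporting sensitivity alongside precision matters for screening tasks like ROP, where missing a true stage-three case (a false negative) is costlier than a false alarm.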