Measurement of the percent severity of visible symptoms or injuries caused by diseases or insect pests on plant organs is essential in plant health research. Current digital imaging methods based on color thresholding are generally more accurate and reliable than visual estimates. However, these methods perform poorly when scene illumination and background are not uniform, conditions that can be overcome by convolutional neural networks (CNNs) for semantic segmentation. In this study, we trained five CNN models for pixel-level prediction in images of individual leaves exhibiting necrotic lesions and/or yellowing caused by the insect pest coffee leaf miner (CLM) and two fungal diseases: soybean rust (SBR) and wheat tan spot (WTS). Training was performed on 80% of the images, annotated for three classes: leaf background (B), healthy leaf (H), and injured leaf (I). Precision, recall, and Intersection over Union (IoU) metrics on the test image set were highest for the B class, followed by the H and I classes, irrespective of the model. When the pixel-level predictions were used to estimate percent severity, Feature Pyramid Network (FPN), Unet, and DeepLabv3+ (Xception) performed best: concordance coefficients were greater than 0.95, 0.96, and 0.98 for the CLM, SBR, and WTS datasets, respectively. The other three models tended to misclassify healthy pixels as injured, leading to overestimation of percent severity. The accuracy of the predictions by the CNN models was comparable with that obtained using standard commercial software, which requires manual adjustments that slow the process.
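The abstract describes deriving percent severity from per-pixel class predictions (B, H, I) and evaluating them with IoU. A minimal sketch of those two computations is shown below; the class label encoding (0 = B, 1 = H, 2 = I) and the function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Assumed label encoding for the three annotation classes (hypothetical):
# 0 = leaf background (B), 1 = healthy leaf (H), 2 = injured leaf (I)
B, H, I = 0, 1, 2

def percent_severity(pred_mask):
    """Percent severity = injured leaf pixels / total leaf pixels * 100.

    Background pixels are excluded from the denominator, so severity is
    relative to the leaf area only.
    """
    injured = np.count_nonzero(pred_mask == I)
    leaf = np.count_nonzero(pred_mask != B)
    return 100.0 * injured / leaf if leaf else 0.0

def class_iou(pred_mask, true_mask, cls):
    """Intersection over Union for a single class label."""
    inter = np.count_nonzero((pred_mask == cls) & (true_mask == cls))
    union = np.count_nonzero((pred_mask == cls) | (true_mask == cls))
    return inter / union if union else float("nan")

# Toy 4x4 prediction: 4 background, 9 healthy, 3 injured pixels,
# so severity = 3 / (9 + 3) * 100 = 25%.
pred = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [1, 1, 1, 2],
                 [1, 1, 2, 2]])
print(percent_severity(pred))          # 25.0
print(class_iou(pred, pred, I))        # 1.0 (prediction equals ground truth)
```

The same per-class IoU, computed on held-out annotated masks, is what the abstract reports being highest for background and lowest for the injured class.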