Significant bodies of research have explored computer vision and image quality individually, but research at the intersection of these two disciplines remains limited. Moreover, evidence suggests that image quality as judged by the human visual system may differ from image quality as measured by the performance of computer vision algorithms. Furthermore, most research on the relationship between image quality and computer vision performance has focused on single-label image classification and has not considered tasks such as semantic segmentation or object detection. Here, we examine the relationship between three primary image quality factors (resolution, blur, and noise) and the performance of deep-learning-based object detection models. To do so, we measure the impact of these image quality variables on the mean average precision (mAP) of object detection models, evaluating both models trained only on high-quality images and models fine-tuned on lower-quality images. Additionally, we map our primary image quality variables to the terms used in the General Image Quality Equation (GIQE), namely ground sample distance (GSD), relative edge response (RER), and signal-to-noise ratio (SNR), and assess the suitability of the GIQE functional form for modeling object detector performance in the presence of significant image distortions.