Infrared thermographs (IRTs) have been used for fever screening during infectious disease epidemics, including SARS, EVD and COVID-19. Although IRTs have significant potential for human body temperature measurement, the literature indicates inconsistent diagnostic performance, possibly due to wide variations in implemented methodology. A standardized method for IRT fever screening was recently published, but there is a lack of clinical data demonstrating its impact on IRT performance. We performed a clinical study of 596 subjects to assess the diagnostic effectiveness of standardized IRT-based fever screening and to evaluate the effect of facial measurement location. Temperatures from 17 facial locations were extracted from thermal images and compared with oral thermometry. Statistical analyses included calculation of receiver operating characteristic curves and area under the curve (AUC) values for detection of febrile subjects. Pearson correlation coefficients between IRT-based and reference temperatures varied strongly with measurement location. Approaches based on maximum temperatures in either inner canthi or full-face regions showed stronger discrimination ability than maximum forehead temperature (AUC values of 0.95-0.97 vs. 0.86-0.87, respectively) and other specific facial locations. These values are markedly better than the vast majority of results from prior human studies of IRT-based fever screening. Thus, our findings provide clinical confirmation of the utility of consensus approaches for fever screening, including the use of inner canthi temperatures, while also indicating that full-face maximum temperatures may provide an effective alternate approach.
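The febrile-subject discrimination analysis described above can be sketched as follows. This is a minimal illustration with synthetic data, not the study's actual pipeline: the temperatures, fever labels, and the rank-sum AUC formulation are assumptions introduced here for clarity.

```python
# Illustrative sketch of AUC-based discrimination analysis, assuming
# synthetic IRT temperatures and reference-thermometer fever labels.
# Values below are invented for the example, not study data.

def roc_auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney U) formulation: the
    probability that a randomly chosen febrile subject's temperature
    exceeds a randomly chosen afebrile subject's temperature."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical inner-canthus maximum temperatures (deg C) and fever
# labels from a reference oral thermometer (1 = febrile, 0 = afebrile).
irt_temps  = [36.1, 36.4, 37.9, 36.2, 38.3, 36.0, 37.6, 36.5]
is_febrile = [0,    0,    1,    0,    1,    0,    1,    0]

print(roc_auc(irt_temps, is_febrile))  # perfect separation -> 1.0
```

An AUC near 1.0 indicates near-perfect separation of febrile from afebrile subjects; the abstract's reported values of 0.95-0.97 for inner-canthi and full-face maxima would correspond to strong but imperfect separation on real clinical data.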