Objectives: To develop an interpretable AI algorithm to rule out normal large bowel endoscopic biopsies, saving pathologist resources.

Design: Retrospective study.

Setting: One UK NHS site was used for model training and internal validation. External validation was conducted on data from two other NHS sites and one site in Portugal.

Participants: 6,591 whole slide images of endoscopic large bowel biopsies from 3,291 patients (54% female, 46% male).

Main outcome measures: Area under the receiver operating characteristic and precision-recall curves (AUC-ROC and AUC-PR), measuring agreement between the consensus pathologist diagnosis and the AI-generated classification of normal versus abnormal biopsies.

Results: A graph neural network (IGUANA) was developed, incorporating pathologist domain knowledge to classify biopsies as normal or abnormal using clinically driven interpretable features. Model training and internal validation were performed on 5,054 whole slide images from 2,080 patients at a single NHS site, resulting in an AUC-ROC of 0.98 (SD=0.004) and an AUC-PR of 0.98 (SD=0.003). The model's predictive performance was consistent in testing on 1,537 whole slide images from 1,211 patients across three independent external datasets, with a mean AUC-ROC of 0.97 (SD=0.007) and AUC-PR of 0.97 (SD=0.005). Our analysis shows that at a high sensitivity threshold of 99%, the proposed model can, on average, reduce by 55% the number of normal slides requiring pathologist review. A key advantage of IGUANA is its explainable output: it highlights potential abnormalities in a whole slide image as a heatmap overlay and provides numerical values associating the model prediction with various histological features. Example results with interpretable features can be viewed online at https://iguana.dcs.warwick.ac.uk/.

Conclusions: An interpretable AI model was developed to screen large bowel biopsies, flagging abnormal cases for pathologist review.
The model achieved consistently high predictive accuracy on independent cohorts, showing its potential for optimising increasingly scarce pathologist resources and achieving a faster time to diagnosis. IGUANA's explainable predictions can guide pathologists in their diagnostic decision making and help boost their confidence in the algorithm, paving the way for future clinical adoption.
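To make the reported evaluation concrete, the sketch below illustrates (on synthetic slide-level scores, not the study's data or model) how the two quantities in the abstract can be computed: a rank-based AUC-ROC, and the fraction of normal slides that could be ruled out of review when the decision threshold is set to retain 99% sensitivity for abnormal slides. The score distributions and the helper names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic slide-level abnormality scores (assumption: higher = more likely abnormal).
# Label 0 = normal slide, 1 = abnormal slide; distributions chosen to overlap slightly.
y = np.concatenate([np.zeros(500, dtype=int), np.ones(500, dtype=int)])
scores = np.concatenate([rng.normal(0.2, 0.15, 500), rng.normal(0.8, 0.15, 500)])

def auc_roc(y, s):
    # Rank statistic: probability that a random abnormal slide outscores
    # a random normal slide (ties count half).
    pos, neg = s[y == 1], s[y == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

def normals_ruled_out(y, s, sensitivity=0.99):
    # Pick the highest threshold that still flags >= `sensitivity` of abnormal
    # slides, then report the fraction of normal slides scoring below it,
    # i.e. those that would not need pathologist review.
    pos = np.sort(s[y == 1])
    k = int(np.floor(len(pos) * (1 - sensitivity)))  # abnormals tolerated below threshold
    threshold = pos[k]
    return (s[y == 0] < threshold).mean()

print(f"AUC-ROC: {auc_roc(y, scores):.3f}")
print(f"Normal slides ruled out at 99% sensitivity: {normals_ruled_out(y, scores):.0%}")
```

The abstract's 55% figure is the same kind of quantity as `normals_ruled_out` here, averaged over the study cohorts; with well-separated synthetic scores the toy value will differ.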