The histological distinction of lung neuroendocrine carcinomas, including small cell lung carcinoma (SCLC), large cell neuroendocrine carcinoma (LCNEC) and atypical carcinoid (AC), can be challenging in some cases, while bearing prognostic and therapeutic significance. To assist pathologists with histologic subtyping, we applied a deep learning classifier based on a convolutional neural network (CNN) to recognize lung neuroendocrine neoplasms. Slides of primary lung SCLC, LCNEC and AC were obtained from the Laboratory of Clinical and Experimental Pathology (University Hospital Nice, France). Three thoracic pathologists blindly established gold-standard diagnoses. The HALO-AI module (Indica Labs, UK), trained with 18,752 image tiles extracted from 60 slides (SCLC = 20, LCNEC = 20, AC = 20 cases), was then tested on 90 slides (SCLC = 26, LCNEC = 22, AC = 13 and combined SCLC with LCNEC = 4 cases; NSCLC = 25 cases) and evaluated by F1-score and accuracy. A HALO-AI correct area distribution (AD) cutoff of 50% or more was required to credit the CNN with the correct diagnosis. The tumor maps were false-colored and displayed side by side with the original hematoxylin and eosin slides with superimposed pathologist annotations. The trained HALO-AI yielded a mean F1-score of 0.99 (95% CI, 0.939–0.999) on the testing set. Pending further validation on larger cohorts, our CNN model has the potential to work side by side with the pathologist to accurately differentiate between the different lung neuroendocrine carcinomas in challenging cases.
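The slide-level decision rule described above (crediting the CNN with a diagnosis only when the winning class covers at least 50% of the classified area) can be sketched in Python. This is an illustrative sketch, not part of the HALO-AI software; approximating the area distribution by per-class tile counts is an assumption made here for simplicity:

```python
from collections import Counter

def slide_diagnosis(tile_predictions, cutoff=0.5):
    """Return the slide-level call from per-tile class labels.

    tile_predictions: list of per-tile labels, e.g. "SCLC", "LCNEC", "AC".
    The winning class must account for at least `cutoff` of the tiles
    (a stand-in for the correct area distribution) to be credited.
    """
    counts = Counter(tile_predictions)
    label, n = counts.most_common(1)[0]
    area_fraction = n / len(tile_predictions)
    return label if area_fraction >= cutoff else "indeterminate"

# Hypothetical slide: 70% of tiles classified as SCLC.
tiles = ["SCLC"] * 70 + ["LCNEC"] * 20 + ["AC"] * 10
print(slide_diagnosis(tiles))  # SCLC
```

When no class reaches the cutoff, the slide is left unassigned rather than credited with a (possibly wrong) majority vote.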
With the growing standardization of Whole Slide Images (WSIs), deep learning algorithms have shown promising results for the automated classification and localization of tumors. Yet, such algorithms are often difficult to train, as they usually require careful, detailed annotations from expert pathologists, which are tedious to produce. As a result, only slide-level labels are generally available, while annotations of small regions (or tiles) are limited. With only slide-level information, it is difficult to obtain accurate predictions of the localization of pathological tissue inside a slide, even when slide-level classification is good. Moreover, existing algorithms show limited consistency between slide- and tile-level predictions, making their output difficult to interpret, particularly for healthy tissue. Using the attention-based multiple instance learning framework, we propose to combine slide-level labels on all slides with tile-level labels on a small fraction (e.g. 20%) of slides within a histology dataset to improve both classification and localization performance. With this mixed supervision of slides, we aim to enforce a better consistency between slide- and tile-level predictions. To this end, we introduce an attention-based loss function that guides the model’s attention toward discriminative regions inside tumorous slides and enforces equal attention among all tiles of normal slides. On the Camelyon16 dataset, we reached precision and recall scores as high as 0.99 and 0.85 respectively, with an AUC of 0.93 on the competition test set, using only 50% of the slides with tile-level annotations in the training set. Experiments using various proportions of fully annotated slides in the training set show promising results for an improved localization of tumors and classification of slides.
In this work, we showed that using a limited number of fully annotated slides we can improve both the classification and localization performance of an attention-based deep learning model. This increased consistency and performance should help pathologists better interpret the algorithm’s output and focus on suspicious regions in probable tumorous slides.
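The attention-supervision idea described above can be sketched as follows. This is a minimal illustration of the principle only (a cross-entropy between the model's attention weights and a target distribution spread over annotated tumor tiles, or uniform for normal slides); the function names and exact formulation are assumptions, not the authors' published loss:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]
    z = sum(exp)
    return [e / z for e in exp]

def attention_loss(att_scores, tile_labels):
    """Cross-entropy between attention weights and a target distribution.

    tile_labels: 1 for annotated tumor tiles, 0 otherwise.
    On an annotated tumorous slide, the target concentrates mass on
    tumor tiles; on a normal slide (no positive tiles), it is uniform,
    which enforces equal attention across all tiles.
    """
    att = softmax(att_scores)
    n_pos = sum(tile_labels)
    if n_pos > 0:
        target = [lab / n_pos for lab in tile_labels]
    else:
        target = [1 / len(tile_labels)] * len(tile_labels)
    eps = 1e-12  # avoid log(0)
    return -sum(t * math.log(a + eps) for t, a in zip(target, att))
```

In training, this term would be added (with some weight) to the usual slide-level classification loss, and applied only to the fraction of slides carrying tile-level annotations.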
Citation Format: Paul Tourniaire, Marius Ilie, Paul Hofman, Nicholas Ayache, Hervé Delingette. Mixed supervision to improve the classification and localization coherence of tumors in histological slides [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 461.
Purpose: To assess the feasibility of a three-dimensional deep convolutional neural network (3D-CNN) for the general triage of whole-body FDG PET in daily clinical practice. Methods: An institutional clinical data warehouse working environment was devoted to this PET imaging purpose. Dedicated request procedures and data processing workflows were specifically developed within this infrastructure and applied retrospectively to a monocentric dataset as a proof of concept. A custom-made 3D-CNN was first trained and tested on an “unambiguous”, well-balanced data sample, which included strictly normal and highly pathological scans. For the training phase, 90% of the data sample was used (learning set: 80%; validation set: 20%, 5-fold cross-validation) and the remaining 10% constituted the test set. Finally, the model was applied to a “real-life” test set, which included all scans regardless of findings. Text mining of the PET reports, systematically combined with visual rechecking by an experienced reader, served as the standard of truth for PET labeling. Results: Of 8125 scans, 4963 PETs had processable cross-matched medical reports. For the “unambiguous” dataset (1084 PETs), the 3D-CNN’s sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios were 84%, 98%, 98%, 85%, 42.0 and 0.16, respectively (F1 score of 90%). When applied to the “real-life” dataset (4963 PETs), the sensitivity, NPV, LR+, LR− and F1 score decreased substantially (61%, 40%, 2.97, 0.49 and 73%, respectively), whereas the specificity and PPV remained high (79% and 90%). Conclusion: AI-based triage of whole-body FDG PET is promising. Further studies are needed to overcome the challenges presented by the imperfection of real-life PET data.
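The likelihood ratios and F1 score reported for the “unambiguous” dataset follow directly from the sensitivity, specificity and predictive values. A short sketch of those standard definitions; the confusion-matrix counts below are chosen to reproduce the reported percentages and are not the actual cohort sizes:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)        # sensitivity (recall)
    spec = tn / (tn + fp)        # specificity
    ppv = tp / (tp + fp)         # positive predictive value (precision)
    npv = tn / (tn + fn)         # negative predictive value
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    f1 = 2 * ppv * sens / (ppv + sens)
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "lr+": lr_pos, "lr-": lr_neg, "f1": f1}

# Illustrative counts yielding sensitivity 84% and specificity 98%.
m = diagnostic_metrics(tp=84, fp=2, tn=98, fn=16)
print(round(m["lr+"], 1), round(m["lr-"], 2), round(m["f1"], 2))  # 42.0 0.16 0.9
```

With these counts, LR+ = 0.84 / 0.02 = 42.0 and LR− = 0.16 / 0.98 ≈ 0.16, matching the values reported above.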