Artificial intelligence (AI) can predict the presence of molecular alterations directly from routine histopathology slides. However, training robust AI systems requires large datasets, whose collection faces practical, ethical and legal obstacles. These obstacles could be overcome with swarm learning (SL), in which partners jointly train AI models while avoiding data transfer and monopolistic data governance. Here, we demonstrate the successful use of SL in large, multicentric datasets of gigapixel histopathology images from over 5,000 patients. We show that AI models trained using SL can predict BRAF mutational status and microsatellite instability directly from hematoxylin and eosin (H&E)-stained pathology slides of colorectal cancer. We trained AI models on three patient cohorts from Northern Ireland, Germany and the United States, and validated the prediction performance in two independent datasets from the United Kingdom. Our data show that SL-trained AI models outperform most locally trained models and perform on par with models trained on the merged datasets. In addition, we show that SL-based AI models are data efficient. In the future, SL can be used to train distributed AI models for any histopathology image analysis task, eliminating the need for data transfer.
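The core idea of SL (local training with periodic parameter merging, so that raw data never leaves a site) can be sketched in a few lines. The toy below simulates three sites that each train a shared logistic-regression model on private data and then merge only the weights by averaging. This is a minimal illustration of the parameter-merging principle, not the actual Swarm Learning framework; all data, dimensions and hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, steps=50):
    """Plain logistic-regression gradient descent on one site's private data."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Three simulated sites; the data never leaves a site -- only weights are shared.
d = 5
true_w = rng.normal(size=d)
sites = []
for _ in range(3):
    X = rng.normal(size=(200, d))
    y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)
    sites.append((X, y))

w = np.zeros(d)
for sync_round in range(10):
    # Each site trains on its own data starting from the shared weights...
    local_ws = [local_train(w, X, y) for X, y in sites]
    # ...then only the parameters are merged (here: simple averaging).
    w = np.mean(local_ws, axis=0)

# The merged model should align with the generating weight vector.
cos = w @ true_w / (np.linalg.norm(w) * np.linalg.norm(true_w))
```

In a real SL deployment the merging is coordinated by a decentralized network rather than a central server, but the invariant is the same: model parameters travel, patient data does not.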
In the last four years, advances in deep learning have enabled the inference of selected mutational alterations directly from routine histopathology slides. In particular, recent studies have shown that genetic changes in clinically relevant driver genes are reflected in the histological phenotype of solid tumors and can be inferred by analyzing routine hematoxylin and eosin (H&E)-stained tissue sections with deep learning. However, these studies have mostly focused on selected individual genes in selected tumor types. In addition, genetic changes in solid tumors act primarily by altering signaling pathways that regulate cell behavior. In this study, we hypothesized that deep learning networks can be trained to directly predict alterations of genes and pathways across a spectrum of solid tumors. We manually outlined tumor tissue in H&E-stained tissue sections from 7,829 patients with 23 different tumor types from The Cancer Genome Atlas. We then trained convolutional neural networks end-to-end to detect alterations in the most clinically relevant pathways or genes directly from histology images. Using this automated approach, we found that alterations in 12 of 14 clinically relevant pathways, as well as numerous single-gene alterations, appear to be detectable in tissue sections, many of which have not been reported before. Interestingly, prediction performance was better for single-gene alterations than for pathway alterations. Collectively, these data demonstrate that genetic alterations can be predicted directly from routine cancer histology images and show that individual genes leave a stronger morphological signature than genetic pathways.
Artificial intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use. Here, we show that convolutional neural networks (CNNs) are highly susceptible to white- and black-box adversarial attacks in clinically relevant weakly supervised classification tasks. Adversarially robust training and dual batch normalization (DBN) are possible mitigation strategies but require precise knowledge of the type of attack used at inference time. We demonstrate that vision transformers (ViTs) perform on par with CNNs at baseline, but are orders of magnitude more robust to white- and black-box attacks. At a mechanistic level, we show that this is associated with a more robust latent representation of clinically relevant categories in ViTs compared to CNNs. Our results are in line with previous theoretical studies and provide empirical evidence that ViTs are robust learners in computational pathology. This implies that large-scale rollout of AI models in computational pathology should rely on ViTs rather than CNN-based classifiers to provide inherent protection against perturbation of the input data, especially adversarial attacks.
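As a minimal illustration of the white-box attacks described above, the sketch below applies the fast gradient sign method (FGSM) to a fixed linear classifier standing in for a trained network: the attacker uses the model's own input gradient to craft a small perturbation that flips the prediction. The model, input and epsilon are invented for the example.

```python
import numpy as np

# Fixed linear "model": f(x) = sigmoid(w.x + b); a stand-in for a trained CNN.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: move x by eps in the sign of the input
    gradient of the loss. For logistic loss that gradient is (p - y) * w."""
    p = predict(x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.5, -0.2])   # a "clean" input of the positive class
y = 1.0
x_adv = fgsm(x, y, eps=0.5)

clean_prob = predict(x)      # confidently positive
adv_prob = predict(x_adv)    # the small perturbation flips the prediction
```

White-box attacks like this assume full access to the model's gradients; black-box attacks, also studied in the abstract above, must instead estimate the perturbation direction from queries or from a surrogate model.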
The histopathological phenotype of tumors reflects the underlying genetic makeup. Deep learning can predict genetic alterations from pathology slides, but it is unclear how well these predictions generalize to external datasets. We performed a systematic study on deep-learning-based prediction of genetic alterations from histology, using two large datasets of multiple tumor types. We show that an analysis pipeline that integrates self-supervised feature extraction and attention-based multiple instance learning achieves robust predictability and generalizability.
The histopathological phenotype of tumors reflects the underlying genetic makeup. Deep learning can predict genetic alterations from tissue morphology, but it is unclear how well these predictions generalize to external datasets. Here, we present a deep learning pipeline based on self-supervised feature extraction that achieves robust prediction of genetic alterations in two large multicentric datasets of seven tumor types.
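The attention-based multiple instance learning step mentioned above can be sketched as follows: tile-level feature vectors from one slide form a "bag", a small attention module scores each tile, and the slide-level embedding is the attention-weighted sum of tile features. This is a simplified NumPy-only sketch in the spirit of Ilse et al.'s attention pooling; the dimensions and random features are illustrative and not taken from the pipeline itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(H, V, u):
    """Attention pooling over a bag of instances: each tile embedding h_k
    gets a score u . tanh(V h_k); the slide embedding is the
    attention-weighted sum of the tile embeddings."""
    scores = np.array([u @ np.tanh(V @ h) for h in H])
    a = softmax(scores)                     # attention weights, sum to 1
    return a, (a[:, None] * H).sum(axis=0)  # slide-level embedding

# One "slide" as a bag of 8 tile feature vectors (e.g. from a self-supervised
# feature extractor); sizes are made up for the example.
H = rng.normal(size=(8, 16))
V = rng.normal(size=(32, 16)) * 0.1
u = rng.normal(size=32)

a, z = attention_pool(H, V, u)
```

The attention weights `a` double as an interpretability tool: high-weight tiles indicate the regions the model relied on for its slide-level prediction.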
The interpretation of digitized histopathology images has been transformed by artificial intelligence (AI). End-to-end AI algorithms can infer high-level features directly from raw image data, extending the capabilities of human experts. In particular, AI can predict tumor subtypes, genetic mutations and gene expression directly from hematoxylin and eosin (H&E)-stained pathology slides. However, existing end-to-end AI workflows are poorly standardized and not easily adaptable to new tasks. Here, we introduce DeepMed, a Python library for predicting any high-level attribute directly from histopathological whole slide images alone, or from images coupled with additional metadata (https://github.com/KatherLab/deepmed). Unlike earlier computational pipelines, DeepMed is highly developer-friendly: its structure is modular and separates preprocessing, training, deployment, statistics and visualization in such a way that any one of these processes can be altered without affecting the others. DeepMed also scales easily from local use on laptop computers to multi-GPU clusters in cloud computing services, and can therefore be used for teaching, prototyping and large-scale applications. Finally, DeepMed is user-friendly and allows researchers to easily test multiple hypotheses in a single dataset (via cross-validation) or in multiple datasets (via external validation). Here, we demonstrate and document DeepMed's ability to predict molecular alterations, histopathological subtypes and molecular features from routine histopathology images, using a large benchmark dataset which we release publicly. In summary, DeepMed is a fully integrated and broadly applicable end-to-end AI pipeline for the biomedical research community.
Background and Aims One of the most important complications of heart transplantation is organ rejection, which is diagnosed on endomyocardial biopsies by pathologists. Computer-based systems could assist the diagnostic process and potentially improve reproducibility. Here, we evaluated the feasibility of using deep learning to predict the degree of cellular rejection from pathology slides, as defined by the International Society for Heart and Lung Transplantation (ISHLT) grading system. Methods We collected 1079 histopathology slides from 325 patients at three transplant centers in Germany. We trained an attention-based deep neural network to predict rejection in the primary cohort and evaluated its performance using cross-validation and by deploying it to three external cohorts. Results For binary prediction (rejection yes/no), the mean area under the receiver operating characteristic curve (AUROC) was 0.849 in the cross-validated experiment and 0.734, 0.729 and 0.716 in the external validation cohorts. For prediction of the ISHLT grade (0R, 1R, 2/3R), AUROCs were 0.835, 0.633 and 0.905 in the cross-validated experiment, and 0.764/0.597/0.913, 0.631/0.633/0.682 and 0.722/0.601/0.805 in the three validation cohorts, respectively. The predictions of the AI model were interpretable by human experts and highlighted plausible morphological patterns. Conclusions We conclude that artificial intelligence can detect patterns of cellular transplant rejection in routine pathology, even when trained on small cohorts.
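The per-grade AUROCs reported for the three ISHLT classes are one-vs-rest: each grade is scored against all other grades, yielding one AUROC per class. A minimal sketch of that computation, using the rank-sum (Mann-Whitney U) formulation of the AUROC and made-up prediction scores (this implementation ignores ties in the scores for simplicity):

```python
import numpy as np

def auroc(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    formulation; score ties are not averaged, for brevity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# One-vs-rest over three grades (0R, 1R, 2/3R encoded as 0, 1, 2):
# each class's probability column is scored against all other classes.
grades = np.array([0, 0, 1, 1, 2, 2])
probs = np.array([            # made-up per-class probabilities, rows sum to 1
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.3, 0.4, 0.3],
    [0.1, 0.2, 0.7],
    [0.1, 0.3, 0.6],
])
per_class = [auroc((grades == c).astype(int), probs[:, c]) for c in range(3)]
```

Each entry of `per_class` corresponds to one grade, which is how a triple such as 0.835, 0.633 and 0.905 arises from a single multi-class model.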