2022
DOI: 10.1101/2022.10.17.22279804
Preprint

Screening of normal endoscopic large bowel biopsies with artificial intelligence: a retrospective study

Abstract: Objectives: To develop an interpretable AI algorithm to rule out normal large bowel endoscopic biopsies, saving pathologist resources. Design: Retrospective study. Setting: One UK NHS site was used for model training and internal validation; external validation was conducted on data from two other NHS sites and one site in Portugal. Participants: 6,591 whole-slide images of endoscopic large bowel biopsies from 3,291 patients (54% female, 46% male). Main outcome measures: Area under the receiver operating characteristic…
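As context for the main outcome measure, here is a minimal sketch of an AUROC computation with scikit-learn; the slide-level labels and scores below are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the main outcome measure (area under the ROC curve),
# computed on hypothetical slide-level labels and model scores.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0]               # 1 = abnormal biopsy (placeholder)
y_score = [0.2, 0.4, 0.9, 0.7, 0.8, 0.1]  # model scores (placeholder)
print(roc_auc_score(y_true, y_score))     # AUROC in [0, 1]
```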

Cited by 4 publications (6 citation statements) · References 57 publications (69 reference statements)
Order By: Relevance
“…Providing local xAI is generally easier than global xAI, and various local xAI techniques have been developed for AI systems in healthcare (Poceviciute et al 2020, Zhang et al 2022). Saliency maps, visual displays highlighting the features that contribute to a prediction (e.g., heat maps), are a natural fit with AI systems that make predictions from image data and are also intuitive (Guidotti et al 2018, Machlev et al 2022, Graham et al 2022). Couture et al (2018) tested a saliency map technique in an AI system for detecting breast cancer tissue.…”
Section: Understanding Accountability as a Constraint and a Resource… (mentioning)
confidence: 99%
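For readers unfamiliar with saliency maps, a minimal sketch follows, assuming a PyTorch image classifier; the model, input, and the vanilla-gradient method shown are illustrative stand-ins, not the specific techniques evaluated in the cited studies.

```python
# Minimal sketch of a gradient-based saliency map ("vanilla gradients"),
# assuming a trained PyTorch image classifier. The model and input are
# random placeholders, not those of the cited studies.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)   # stand-in classifier
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # dummy RGB input

logits = model(image)
score = logits[0, logits.argmax()]      # score of the predicted class
score.backward()                        # gradients w.r.t. input pixels

# Saliency = max absolute gradient over colour channels; high values mark
# pixels that most influence the prediction (rendered as a heat map).
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # 224 x 224 map
```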
“…This could also be a solution to the problem of providing progressively richer, recipient-designed explanations of behaviour (Button & Dourish 1996) and go some way towards providing a degree of natural accountability in AI systems. For AI systems designed to assist in the diagnosis of medical images, a more general recommendation would be that local account design should be informed by the practices that healthcare professionals use to interpret images, i.e., clinically meaningful features (Graham et al 2022, Nix et al 2022) and how underlying physical structures manifest themselves. In mammography, this would suggest that local accounts be sensitive not only to the 'geographies of suspicion' but also to how radiologists make use of their understanding of the physics of mammograms and knowledge of breast architecture.…”
Section: Understanding Accountability as a Constraint and a Resource… (mentioning)
confidence: 99%
“…34 Another colon biopsy screening tool, aimed at separating colonic biopsies into normal and abnormal cases, did not detect several entities that human pathologists would expect it to, such as signet ring cells, giant cells, mitotic figures, and spirochaetosis. 35 This means that although such algorithm results are promising, pathologists need to understand the algorithms' limitations and delineate the further work required to improve performance before they can be used safely.…”
Section: Missing Data Leading to Bias and Hidden Stratification (mentioning)
confidence: 99%
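To make the normal-vs-abnormal screening setting concrete, here is a minimal sketch of how a rule-out threshold might be tuned for very high sensitivity, assuming scikit-learn; the labels, scores, and the 0.99 sensitivity target are synthetic illustrative assumptions, not figures from the cited paper.

```python
# Sketch of threshold selection for a normal-vs-abnormal screening model,
# tuned so abnormal slides are almost never ruled out. Scores are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # 1 = abnormal slide
y_score = np.array([0.1, 0.3, 0.8, 0.6, 0.2, 0.9, 0.4, 0.7])

fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Pick the highest threshold that still catches ~all abnormal cases
# (sensitivity >= 0.99); everything scoring below it is "ruled out".
ok = tpr >= 0.99
threshold = thresholds[ok][0]
ruled_out = y_score < threshold
print(f"threshold={threshold:.2f}, slides ruled out: {ruled_out.sum()}")
```

Entities the model never saw in training (e.g., rare findings like spirochaetosis) can still score below any such threshold, which is why the quoted critique matters for safe deployment.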
“…On the other hand, there is a growing interest in harnessing graph-based learning techniques to analyze the association between the TME and disease in a data-driven manner, without the need for explicit hypotheses [21,22]. Numerous recent studies have reported promising results from leveraging graph neural networks (GNNs) to model the TME and predict the presence [23,24], grade [25,26], stage [27], subtype [28,29], and prognosis [30–34] of diverse types of cancer. Yet the limited size, biological heterogeneity, and differences in staining and imaging protocols of clinical datasets constrain the quality of modern machine learning models by impeding their generalization capacity, particularly in cross-study scenarios, which are often overlooked [35,36].…”
Section: Introduction (mentioning)
confidence: 99%
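As a concrete illustration of the graph-based modelling described above, the following minimal sketch builds a small graph convolutional network over a toy cell graph, assuming PyTorch Geometric; the node features, edges, and two-class head are hypothetical placeholders rather than any cited architecture.

```python
# Minimal sketch of a GNN over a cell graph of the kind used to model the
# tumour microenvironment. Node features and edges are random placeholders.
import torch
from torch_geometric.nn import GCNConv, global_mean_pool

x = torch.rand(50, 16)                        # 50 cells, 16 features each
edge_index = torch.randint(0, 50, (2, 200))   # random cell-cell edges
batch = torch.zeros(50, dtype=torch.long)     # one graph in the batch

class CellGraphGCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(16, 32)
        self.conv2 = GCNConv(32, 32)
        self.head = torch.nn.Linear(32, 2)    # e.g. two diagnostic classes

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()  # message passing, hop 1
        h = self.conv2(h, edge_index).relu()  # message passing, hop 2
        h = global_mean_pool(h, batch)        # graph-level embedding
        return self.head(h)

model = CellGraphGCN()
logits = model(x, edge_index, batch)          # shape: (1, 2)
```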
“…Additionally, the interpretability of deep learning models in general remains a challenge [37, 38]. Many of these studies do not discuss explanation in their work [25, 26, 28, 29, 32, 33]; some employ gradient-based [23, 27, 34] and permutation-based [24] methods to generate an importance heatmap as the model's explanation, and others conduct post-hoc analyses on learned graph features to draw clinical implications from their models [30, 31]. These explanations, although providing some insight into the learned information, might not be clear, intuitive, or meaningful, especially for clinical experts and cancer researchers.…”
Section: Introduction (mentioning)
confidence: 99%
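For contrast with gradient heatmaps, here is a minimal sketch of the permutation-based importance the quote mentions, assuming scikit-learn; the model and dataset are synthetic, so the scores are purely illustrative.

```python
# Sketch of permutation-based importance: shuffle one feature at a time and
# measure the drop in performance. Model and data are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Ranked importances like these are one way to surface which inputs drive a prediction, though, as the quoted passage notes, such outputs are not always meaningful to clinical experts.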