2022
DOI: 10.3389/fcvm.2022.1016032
Clinician's guide to trustworthy and responsible artificial intelligence in cardiovascular imaging

Abstract: A growing number of artificial intelligence (AI)-based systems are being proposed and developed in cardiology, driven by the increasing need to deal with the vast amount of clinical and imaging data with the ultimate aim of advancing patient care, diagnosis and prognostication. However, there is a critical gap between the development and clinical deployment of AI tools. A key consideration for implementing AI tools into real-life clinical practice is their “trustworthiness” by end-users. Namely, we must ensure…

Citations: cited by 13 publications (5 citation statements)
References: 114 publications
“…There is also the wider issue of AI trustworthiness which is prevalent within cardiovascular imaging. Our model has some mitigating features as highlighted by Szabo et al (27) such as the inclusion of multicentre data in the initial training and creating results that are explainable. This study also adds to this by demonstrating that it performs consistently in a real-world patient population without any exhibition of bias.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
“…Examples include incorrect self-diagnosis and treatment, delayed seeking of medical help, potential disease transmission, and undermining trust in healthcare professionals and health institutions [1,6,25]. Thus, ensuring the generation of correct, reliable, and credible medical information is of high importance and should be considered by AI model developers, considering the current evidence showing a generation of inaccurate information by these AI-based models [26][27][28]. Additionally, such an approach is recommended in various health domains given the intricacies and peculiarities of each subject (e.g., maxillofacial surgery, dentistry, and pharmacy) [29][30][31][32].…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
“…Examples include incorrect self-diagnosis and treatment, delayed seeking of medical help, potential disease transmission, and undermining trust in healthcare professionals and health institutions [1,6,25]. Thus, ensuring the generation of correct, reliable, and credible health information is of high importance and should be considered by AI-models' developers considering the current evidence showing the generation of inaccurate information by these AI-based models [26,27].…”
Section: Discussion
Citation type: mentioning (confidence: 99%)