2024
DOI: 10.1016/j.eclinm.2024.102479
Health equity assessment of machine learning performance (HEAL): a framework and dermatology AI model case study

Mike Schaekermann,
Terry Spitz,
Malcolm Pyles
et al.

Cited by 4 publications (3 citation statements); references 44 publications.
“…Additionally, Liu and Primiero's (2023) systematic review predominantly consisted of papers with participants of East Asian origin, with some studies containing only 10% of participants with FST types IV–VI. Schaekermann et al. (2024) developed the Health Equity Assessment of Machine Learning performance (HEAL) framework to assess the performance of health AI in a case study. While the Schaekermann et al. (2024) case study was carefully sampled to balance demographics, FST V–VI and American Indian/Alaska Native participants were still poorly represented.…”
Section: Results (mentioning; confidence: 99%)
“…Schaekermann et al. (2024) developed the Health Equity Assessment of Machine Learning performance (HEAL) framework to assess the performance of health AI in a case study. While the Schaekermann et al. (2024) case study was carefully sampled to balance demographics, FST V–VI and American Indian/Alaska Native participants were still poorly represented. These studies' results are either skewed by poor representation of people of colour, which limits their generalisability, or illustrate the difficulty of assembling balanced data sets with limited resources.…”
Section: Results (mentioning; confidence: 99%)
“…Algorithms developed by the International Skin Imaging Collaboration (ISIC) can match the expertise of professional dermatologists in simulated tests 41,42. However, data biases in AI training models can affect generalisability across different ethnic and socio-economic populations, leading to algorithmic biases and, to some extent, health disparities 43–46. For example, ISIC's reported diagnostic accuracy does not include individuals with Fitzpatrick phototype III or higher 47.…”
Section: Discussion (mentioning; confidence: 99%)