2022
DOI: 10.1016/j.media.2021.102274
Does your dermatology classifier know what it doesn’t know? Detecting the long-tail of unseen conditions

Cited by 36 publications (21 citation statements)
References 6 publications
“…Approaches may build on previous work that combines language models and knowledge graphs 25,26 to reason step-by-step about surgical tasks. Additionally, GMAI deployed in surgical settings will probably face unusual clinical phenomena that cannot be included during model development, owing to their rarity, a challenge known as the long tail of unseen conditions 27 . Medical reasoning abilities will be crucial for both detecting previously unseen outliers and explaining them, as exemplified in Fig.…”
Section: Grounded Radiology Reports: GMAI Enables a New Generation Of ... (mentioning; confidence: 99%)
“…Promisingly, ISIC provides a template of the large, open‐access datasets that have driven improvement of ML in other fields, 22,49–51 and it continues to increase in size. Technological advances may also help the robustness of ML model performance, such as abstaining from predictions in rare or unknown disease classes 42 …”
Section: Discussion (mentioning; confidence: 99%)
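The abstention idea cited above can be illustrated with a minimal sketch: a classifier withholds its prediction when its top softmax probability falls below a threshold, deferring rare or unknown cases to a clinician. The threshold value, class probabilities, and function name here are illustrative assumptions, not the cited paper's method.

```python
import numpy as np

def predict_or_abstain(probs: np.ndarray, threshold: float = 0.8):
    """Return the predicted class index, or None to abstain."""
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return None  # defer to a clinician instead of guessing
    return top

# Softmax outputs for two images: one confident, one ambiguous.
confident = np.array([0.93, 0.04, 0.03])
ambiguous = np.array([0.40, 0.35, 0.25])

print(predict_or_abstain(confident))  # 0 (predict)
print(predict_or_abstain(ambiguous))  # None (abstain)
```

In practice the threshold would be tuned on a validation set to trade coverage against error rate; more robust OOD scores than max-softmax exist, but the abstention logic is the same.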
“…Thirdly, although we used a low threshold for considering a journal ‘medical’, some studies could be missed from non‐medical journal sources. However, in studies identified in non‐medical journals that considered rare disease classes, the focus was on methodological modifications rather than clinical performance 42,53,54 . Finally, some studies may have reported the number of training images used after performing data augmentation, and thus, image counts may be inaccurate.…”
Section: Discussion (mentioning; confidence: 99%)
“…Describe whether images with classes that are OOD were included in the study test set, and report findings. 45 If images with OOD classes were not assessed, explain the drawbacks to clinical application (i.e., undefined behavior when presented with classes outside of those studied). In some cases, OOD data may be subtle; for example, beyond classes not represented in training data, OOD may include unique combinations of other characteristics, such as clinical site, camera used, lighting, and patient demographics, of which some combinations may be underrepresented in algorithm training data.…”
Section: Meaning (mentioning; confidence: 99%)
“…For example, if an algorithm is trained to differentiate nevi vs melanomas, any image showing a diagnosis outside of nevi and melanomas would be OOD. Describe whether images with classes that are OOD were included in the study test set, and report findings. If images with OOD classes were not assessed, explain the drawbacks to clinical application (i.e., undefined behavior when presented with classes outside of those studied).…”
Section: Recommendations (mentioning; confidence: 99%)
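The class-based definition of OOD in the statement above can be sketched directly: a test image is OOD whenever its diagnosis label lies outside the training label set. The class names and helper function here are illustrative assumptions.

```python
# Classes the hypothetical model was trained on.
TRAIN_CLASSES = {"nevus", "melanoma"}

def split_test_set(labeled_images):
    """Partition (image_id, diagnosis) pairs into in-distribution and OOD."""
    in_dist, ood = [], []
    for image_id, diagnosis in labeled_images:
        (in_dist if diagnosis in TRAIN_CLASSES else ood).append(image_id)
    return in_dist, ood

test_set = [
    ("img1", "nevus"),
    ("img2", "seborrheic keratosis"),  # label absent from training -> OOD
    ("img3", "melanoma"),
]
in_dist, ood = split_test_set(test_set)
print(ood)  # ['img2']
```

Reporting the size and composition of the OOD partition, as the recommendation suggests, makes explicit which part of the test set probes behavior the model was never trained for.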