2022
DOI: 10.1007/978-3-031-16437-8_16
SATr: Slice Attention with Transformer for Universal Lesion Detection

Cited by 22 publications (12 citation statements)
References 34 publications
“…Multi-organ/universal [32,36,40,61], partially-labeled [16,80,84], multi-domain [25,44], and multi-task [7,21] algorithms are emerging, but none of them addresses multi-cancer detection and diagnosis problems. DeepLesion [73] attempts to tackle the universal lesion detection task in CT scans, but its derived lesion detection methods [69,70,72] and several follow-up works [35,41,52,53,71,74] have so far reported mostly moderate multi-class lesion detection performance. Distinguishing between malignant and benign lesions in the multi-class tumor setting is still far from a clinical reality.…”
Section: Related Work (mentioning)
confidence: 99%
“…Transformers [57] have advanced the state-of-the-art performance in various computer vision tasks [6,9,19,20,38,55,86], by capturing global interactions between image patches and having no built-in inductive prior. The success of Transformer has also been witnessed in medical image detection [34] and segmentation [8,22,67]. With the recent progress in transformers [6,59], a new variant called mask Transformers has been proposed, where segmentation predictions are represented by a set of query embeddings with their own semantic class labels, generated through the conversion of query embedding to mask embedding vectors followed by multiplying with the image features.…”
Section: Related Work (mentioning)
confidence: 99%
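The mask-Transformer decoding step quoted above (query embeddings converted to mask embeddings, then multiplied with image features) can be sketched as follows. This is a minimal illustration, not the cited papers' actual implementation: the sizes, the random weights, and the single linear projection are all assumptions standing in for learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

num_queries, embed_dim = 8, 32  # illustrative sizes (assumption)
H, W = 16, 16                   # spatial size of the image feature map

# One query embedding per predicted segment (learned in a real model).
queries = rng.standard_normal((num_queries, embed_dim))

# Convert query embeddings to mask embedding vectors via a linear
# projection (weights would be learned; random here for the sketch).
W_mask = rng.standard_normal((embed_dim, embed_dim))
mask_embeddings = queries @ W_mask  # (num_queries, embed_dim)

# Per-pixel image features from the backbone / pixel decoder.
image_features = rng.standard_normal((embed_dim, H, W))

# Multiply mask embeddings with image features: one mask-logit map per
# query, squashed to per-pixel probabilities.
mask_logits = np.einsum("qc,chw->qhw", mask_embeddings, image_features)
masks = 1.0 / (1.0 + np.exp(-mask_logits))

print(masks.shape)  # (8, 16, 16): one soft mask per query
```

In a real mask Transformer each query would also carry a semantic class prediction; here only the mask-generation arithmetic is shown.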
“…Generally, we record the loss and uncertainty of all training samples for two SOTA ULD methods trained on 25%, 50%, and 100% of the training data, and plot their values or indexes in 2D figures. We hereby show the results based on [54] in Fig. 3; other results are shown in the supplementary materials.…”
Section: The Relationship Analysis Between Loss and Uncertainty (mentioning)
confidence: 99%
“…Figure 3. Illustration of the loss and uncertainty relationship based on SATr [54]. Yellow (or cyan) points denote samples whose absolute difference between uncertainty and loss is greater than (or less than) 0.3.…”
(mentioning)
confidence: 99%
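The yellow/cyan split described in that figure caption (partitioning samples by whether |uncertainty − loss| exceeds 0.3) reduces to a simple threshold test. The sketch below uses synthetic stand-in values; in the cited analysis the per-sample losses and uncertainties would come from a trained ULD model such as SATr.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-sample training loss and predictive uncertainty
# (stand-ins for values recorded from a trained model).
loss = rng.uniform(0.0, 1.0, size=1000)
uncertainty = rng.uniform(0.0, 1.0, size=1000)

# Partition samples by the 0.3 threshold used in the figure.
gap = np.abs(uncertainty - loss)
yellow = gap > 0.3   # loss and uncertainty disagree strongly
cyan = gap <= 0.3    # loss and uncertainty roughly agree

print(int(yellow.sum() + cyan.sum()))  # 1000: every sample is in one group
```

The two boolean masks are mutually exclusive and jointly exhaustive, so each training sample lands in exactly one of the two point clouds.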
“…[4-6] As the 3D lesion extent can be a potential indicator of response to treatment, it is crucial to detect and label 3D lesions so that their size can be accurately measured according to current guidelines. Prior approaches [3-5,7-14] for 3D lesion detection used the publicly available DeepLesion dataset [7], but it contains incomplete annotations [5,6,15] and severe class imbalances [6,15]. In this work, we designed a self-training pipeline for 3D lesion detection and tagging.…”
Section: Introduction (mentioning)
confidence: 99%