2019
DOI: 10.1007/978-3-030-32226-7_22

MULAN: Multitask Universal Lesion Analysis Network for Joint Lesion Detection, Tagging, and Segmentation

Abstract: When reading medical images such as a computed tomography (CT) scan, radiologists generally search across the image to find lesions, characterize and measure them, and then describe them in the radiological report. To automate this process, we propose a multitask universal lesion analysis network (MULAN) for joint detection, tagging, and segmentation of lesions in a variety of body parts, which greatly extends existing work of single-task lesion analysis on specific body parts. MULAN is based on an improved Ma…
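The abstract describes a single network with shared features and separate heads for detection, tagging, and segmentation. Below is a minimal PyTorch sketch of that shared-backbone, multi-head pattern; the layer sizes, tag count, binary lesion class, and the name MultiTaskLesionHead are illustrative assumptions, not MULAN's actual architecture.

```python
# Minimal sketch of a shared-backbone multitask head for joint lesion
# detection, tagging, and segmentation. Sizes and the single-RoI
# simplification are assumptions for illustration only.
import torch
import torch.nn as nn

class MultiTaskLesionHead(nn.Module):
    def __init__(self, in_channels: int = 256, num_tags: int = 8):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(inplace=True)
        )
        # Detection branch: lesion/background score + box refinement per RoI.
        self.det_fc = nn.Sequential(
            nn.Flatten(), nn.Linear(256 * 7 * 7, 1024), nn.ReLU(inplace=True)
        )
        self.cls_score = nn.Linear(1024, 2)
        self.bbox_delta = nn.Linear(1024, 4)
        # Tagging branch: multi-label logits (one sigmoid per tag at inference).
        self.tag_logits = nn.Linear(1024, num_tags)
        # Segmentation branch: small FCN producing a per-RoI mask.
        self.mask_head = nn.Sequential(
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 256, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, 1),
        )

    def forward(self, roi_feat: torch.Tensor):
        # roi_feat: (N, C, 7, 7) RoI-pooled features from a detector backbone.
        x = self.shared(roi_feat)
        fc = self.det_fc(x)
        return {
            "cls": self.cls_score(fc),
            "bbox": self.bbox_delta(fc),
            "tags": self.tag_logits(fc),
            "mask": self.mask_head(x),  # (N, 1, 14, 14) mask logits
        }

# Shape check on random features.
head = MultiTaskLesionHead()
out = head(torch.randn(4, 256, 7, 7))
print({k: tuple(v.shape) for k, v in out.items()})
```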

Cited by 88 publications (137 citation statements)
References 17 publications (68 reference statements)
“…Table 1 presents the comparisons with the previous state-of-the-art (SOTA) methods. Our model surpasses all the SOTA methods on sensitivities at different FPs and MAP@0.5, which includes 3DCE [1], MSB [2], RetinaNet [3], MVP-Net [4] and MULAN [5].…”
Section: Methods (mentioning)
confidence: 92%
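The comparison quoted above reports sensitivity at fixed false positives (FPs) per image, an FROC-style operating point, alongside mAP@0.5. As a rough illustration of the former metric, here is a hedged sketch for one operating point; the function names, the greedy matching rule, and the IoU threshold of 0.5 are simplifying assumptions, not the exact DeepLesion evaluation protocol.

```python
# Hedged sketch of "sensitivity at k false positives per image" for box
# detections, using greedy IoU matching and a score-threshold sweep.
from typing import List, Tuple
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    # a, b: [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def sensitivity_at_fp(images: List[Tuple[np.ndarray, np.ndarray, np.ndarray]],
                      fp_per_image: float, iou_thr: float = 0.5) -> float:
    """images: list of (pred_boxes Nx4, pred_scores N, gt_boxes Mx4)."""
    thresholds = sorted({s for _, scores, _ in images for s in scores}, reverse=True)
    total_gt = sum(len(gt) for _, _, gt in images)
    best = 0.0
    for t in thresholds:
        tp, fp = 0, 0
        for boxes, scores, gt in images:
            keep = boxes[scores >= t]
            matched = set()
            for box in keep:
                hit = next((j for j, g in enumerate(gt)
                            if j not in matched and iou(box, g) >= iou_thr), None)
                if hit is None:
                    fp += 1
                else:
                    matched.add(hit)
                    tp += 1
        # Keep the best sensitivity whose average FP count fits the budget.
        if fp / len(images) <= fp_per_image:
            best = max(best, tp / max(total_gt, 1))
    return best
```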
“…Supervised pre-training from natural images has proven to be an effective way for 2D medical image transfer learning [1][2][3][4][5]10]. This indicates that using supervised pre-training models from another domain can actually benefit the medical image analysis application.…”
Section: Supervised 3D Pre-training With COCO Dataset (mentioning)
confidence: 99%
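The statement above concerns reusing supervised pre-training from natural images for medical image transfer learning. A minimal, hedged sketch of that pattern with torchvision follows, assuming a detector pre-trained on natural images (COCO) fine-tuned for a two-class lesion/background task; this is a generic recipe, not the cited papers' exact setup.

```python
# Start from a detector pre-trained on natural images and fine-tune it for a
# 2-class lesion/background task. Hyperparameters are illustrative only.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# Fine-tune all parameters with a small learning rate; freezing early backbone
# stages is a common alternative when the target dataset is small.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9,
                            weight_decay=1e-4)
```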
“…As shown in Table 1, our method brings promising detection performance improvements for all baselines. The improvements of Faster R-CNN [27], 9-slice 3DCE, and MVP-Net are more pronounced than those of MULAN w/o SRL [8] and AlignShift [9]. This is because MULAN and AlignShift introduce extra weak segmentation masks generated from radiologist-annotated RECIST labels.…”
Section: Lesion Detection Performance (mentioning)
confidence: 98%
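The weak masks mentioned in this statement are derived from RECIST measurements (two crossing diameters). One common simplification is to rasterize an ellipse spanned by those diameters; the sketch below does that, with the function name recist_to_mask and the center/orientation choices being assumptions rather than the exact pseudo-mask procedure of the cited works.

```python
# Hedged sketch: rasterize a weak elliptical pseudo-mask from a RECIST
# measurement (long- and short-axis endpoints).
import numpy as np

def recist_to_mask(long_axis, short_axis, shape):
    """long_axis/short_axis: ((x1, y1), (x2, y2)) endpoints; shape: (H, W)."""
    (x1, y1), (x2, y2) = long_axis
    (u1, v1), (u2, v2) = short_axis
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # ellipse center (assumed)
    a = 0.5 * np.hypot(x2 - x1, y2 - y1)           # semi-major axis
    b = 0.5 * np.hypot(u2 - u1, v2 - v1)           # semi-minor axis
    theta = np.arctan2(y2 - y1, x2 - x1)           # long-axis orientation
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dx, dy = xx - cx, yy - cy
    # Rotate pixel coordinates into the ellipse frame, test the ellipse equation.
    xr = dx * np.cos(theta) + dy * np.sin(theta)
    yr = -dx * np.sin(theta) + dy * np.cos(theta)
    return ((xr / (a + 1e-9)) ** 2 + (yr / (b + 1e-9)) ** 2) <= 1.0

mask = recist_to_mask(((40, 60), (90, 80)), ((60, 55), (70, 85)), shape=(128, 128))
print(mask.sum(), "pixels inside the pseudo-mask")
```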
“…An illustrative example is, when translating an adult X-ray into a pediatric X-ray, there is no guarantee that fine-grained disease content on the original image will be explicitly transferred. The capability of preserving class-specific semantic context across domains is crucial for medical imaging analysis for certain clinically relevant tasks, such as disease or lesion classification, detection and segmentation [14,11,9,17]. However, to our best knowledge, solutions to this problem of adversarial adaptation for medical imaging are limited.…”
Section: Introduction (mentioning)
confidence: 99%