Medical Imaging 2022: Image Processing (2022)
DOI: 10.1117/12.2611177
Evaluating transformer-based semantic segmentation networks for pathological image segmentation

Cited by 19 publications (14 citation statements)
References 4 publications
“…Similarly in medical imaging, recent years have seen the emergence of many vision transformer models replacing CNNs. Such models have demonstrated enhanced global context, leading to both qualitative and quantitative improvements for histopathology [57], brain tumor segmentation [58], [59], tumor classification [60], and polyp detection [61]. Pure (end-to-end) transformers have even shown advantages over CNNs in the low-data regime for brain segmentation [62].…”
Section: B. Attention (mentioning)
confidence: 99%
“…The authors tested their framework on different computational pathology problems. Using high-resolution images, Nguyen et al. [130] performed a comparative analysis of architectures based on CNN and Transformer modules and reported improved results with the Transformer-based approaches. In a similar approach, using hyperspectral image data, Yun et al. [131] proposed an encoder-decoder architecture with CNN and Transformer blocks to extract and model spatial and spectral features, achieving competitive results against other methodologies.…”
Section: Breast Lesion Segmentation (mentioning)
confidence: 99%
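The excerpt above describes hybrid designs that pair a CNN feature extractor with Transformer blocks inside an encoder-decoder. For illustration only, the following is a minimal PyTorch sketch of that general pattern (CNN encoder, Transformer bottleneck for global context, CNN decoder); it is not the architecture of Nguyen et al. [130] or Yun et al. [131], and the HybridSegNet class, layer sizes, and hyperparameters are assumptions.

# Minimal sketch of a hybrid CNN + Transformer encoder-decoder for segmentation.
# Illustrative only: the class name, sizes, and depths are assumptions, not the cited models.
import torch
import torch.nn as nn


class HybridSegNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=2, embed_dim=256, num_layers=4):
        super().__init__()
        # CNN encoder: downsample the image 8x and produce embed_dim feature maps.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer bottleneck: global self-attention over the flattened feature map
        # (positional encodings omitted here for brevity).
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=8, dim_feedforward=4 * embed_dim, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        # CNN decoder: upsample back to input resolution and predict per-pixel logits.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(embed_dim, 128, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, num_classes, 2, stride=2),
        )

    def forward(self, x):
        feats = self.encoder(x)                    # (B, C, H/8, W/8)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H/8 * W/8, C) token sequence
        tokens = self.transformer(tokens)          # global context via self-attention
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(feats)                 # (B, num_classes, H, W)


if __name__ == "__main__":
    model = HybridSegNet()
    logits = model(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 2, 256, 256])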
“…Transformers originated from and are most publicized in the NLP domain (e.g., BERT [29], GPT-3 [30], T5 [31]), including notable examples in the medical domain [32][33][34][35], and have begun to demonstrate state-of-the-art performance in general imaging applications [36][37][38] including computational pathology [39][40][41]. While the original transformer used an encoder-decoder architecture due to its primary task of machine translation [28], many modern transformers for classification tasks (including ours) use an encoder-only architecture.…”
Section: Introduction (mentioning)
confidence: 99%
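The excerpt above contrasts the original encoder-decoder transformer, built for machine translation, with the encoder-only designs now common for classification. As an illustration of that distinction only, below is a minimal PyTorch sketch of an encoder-only (ViT-style) classifier with a learnable [CLS] token; the EncoderOnlyClassifier class, patch size, depth, and widths are assumptions and do not reproduce any cited model.

# Minimal sketch of an encoder-only transformer classifier (ViT-style).
# Illustrative only; all hyperparameters are assumptions, not the cited models.
import torch
import torch.nn as nn


class EncoderOnlyClassifier(nn.Module):
    def __init__(self, image_size=224, patch_size=16, in_channels=3,
                 embed_dim=384, depth=6, num_heads=6, num_classes=2):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding: non-overlapping patches projected to embed_dim tokens.
        self.patch_embed = nn.Conv2d(in_channels, embed_dim,
                                     kernel_size=patch_size, stride=patch_size)
        # Learnable [CLS] token and positional embeddings.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, D)
        cls = self.cls_token.expand(x.shape[0], -1, -1)          # (B, 1, D)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        tokens = self.encoder(tokens)    # encoder-only: no decoder stack
        return self.head(tokens[:, 0])   # classify from the [CLS] token


if __name__ == "__main__":
    model = EncoderOnlyClassifier()
    print(model(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 2])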
“…The past two years have seen a surge in popularity of transformer modeling for common computational pathology tasks such as WSI segmentation [40,43,44] and histology image classification [41,[45][46][47]. Transformers have also been used for pathologist-level question-answering from histological imaging [39], predicting pathologists' visual attention [48], and for pathology text mining [49].…”
Section: Introduction (mentioning)
confidence: 99%