2022
DOI: 10.3390/diagnostics12071549

Transformers Improve Breast Cancer Diagnosis from Unregistered Multi-View Mammograms

Abstract: Deep convolutional neural networks (CNNs) have been widely used in various medical imaging tasks. However, due to the intrinsic locality of convolution operations, CNNs generally cannot model long-range dependencies well, which are important for accurately identifying or mapping corresponding breast lesion features computed from unregistered multiple mammograms. This motivated us to leverage the architecture of Multi-view Vision Transformers to capture long-range relationships of multiple mammograms from the s…

Cited by 30 publications (23 citation statements). References 36 publications.
“…Their approach combines a residual convolutional network with a transformer encoder that incorporates multilayer perceptron (MLP) modules. Chen et al. [30] introduced multi-view vision transformers (MVTs) for diagnosing breast cancer from unregistered multi-view mammograms. The MVT used local and global transformer blocks to capture within-mammogram and inter-mammogram dependencies and was capable of processing four-view mammograms simultaneously.…”
Section: Transformer
Mentioning confidence: 99%
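To make the cited local/global design concrete, the following is a minimal PyTorch sketch of a multi-view transformer with per-view local blocks and a cross-view global block, assuming patch embeddings have already been extracted for each of the four views. The class name, token counts, layer depths, and the use of a CLS token with a two-class MLP head are illustrative assumptions, not the configuration reported in [30].

```python
# Hypothetical sketch of a four-view (L-CC, R-CC, L-MLO, R-MLO) transformer.
# Patch-embedding of the mammograms is assumed to happen elsewhere.
import torch
import torch.nn as nn

class MultiViewTransformerSketch(nn.Module):
    def __init__(self, num_views=4, patch_dim=768, num_classes=2,
                 nhead=8, depth_local=2, depth_global=2):
        super().__init__()
        # Local blocks: each attends only within one mammogram's tokens
        # (within-mammogram / intra-view dependencies).
        self.local_blocks = nn.ModuleList([
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(patch_dim, nhead, batch_first=True),
                num_layers=depth_local)
            for _ in range(num_views)])
        # Global block: attends over the concatenated tokens of all views
        # (inter-mammogram / cross-view dependencies).
        self.global_block = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(patch_dim, nhead, batch_first=True),
            num_layers=depth_global)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, patch_dim))
        self.mlp_head = nn.Linear(patch_dim, num_classes)

    def forward(self, views):
        # views: list of 4 tensors, each (batch, num_tokens, patch_dim),
        # i.e. patch embeddings of the unregistered mammograms.
        local_out = [blk(v) for blk, v in zip(self.local_blocks, views)]
        tokens = torch.cat(local_out, dim=1)                 # (B, 4*N, D)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)  # (B, 1, D)
        fused = self.global_block(torch.cat([cls, tokens], dim=1))
        return self.mlp_head(fused[:, 0])  # classify from the CLS token
```

The point of the split is that the local blocks only see one view at a time, while the global block attends across the concatenated sequence, which is how long-range inter-mammogram relationships can be modeled without registering the views.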
“…Chen et al. [30] introduced multi-view vision transformers (MVTs) for diagnosing breast cancer from unregistered multi-view mammograms. The MVT used local and global transformer blocks to capture within-mammogram and inter-mammogram dependencies and was capable of processing four-view mammograms simultaneously.…”
Section: Related Work
Mentioning confidence: 99%
“…The suggested model outperformed the CNN models DenseNet201, ResNet101, and VGG19 by achieving 96.29% precision, a 96.15% F1-score, and 95.29% accuracy. Chen et al. [42] suggested using local and global transformer blocks to model the four mammograms taken from the two views of each side. The four token sequences were then combined into a single sequence by the global transformer and passed into the MLP head for classification.…”
Section: Vision Transformer-based Medical Image Classification
Mentioning confidence: 99%
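As a hypothetical usage of the sketch given earlier, the data flow described here (four sequences combined into one, fused by the global transformer, then classified by the MLP head) would look as follows; the batch size, token count, and embedding dimension are arbitrary placeholders.

```python
# Hypothetical usage of MultiViewTransformerSketch defined above: four
# unregistered views, each already embedded into 196 tokens of dimension 768
# (embedding step omitted), fused into one prediction per exam.
import torch

model = MultiViewTransformerSketch()
views = [torch.randn(2, 196, 768) for _ in range(4)]  # e.g. L-CC, R-CC, L-MLO, R-MLO
logits = model(views)
print(logits.shape)  # torch.Size([2, 2]): one two-class logit pair per exam
```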
“…Currently, the literature using transformer-based networks for mammography analysis is still limited, especially for whole-image classification. Very recently, Chen et al. [36] proposed a transformer-based method to classify multi-view mammograms, which achieved an AUC of on a dataset consisting of 3796 images, surpassing the state-of-the-art multi-view CNN model. Simultaneously with our research, the Swin transformer has been tested on single-view mammography classification, obtaining an AUC of on the DDSM dataset [37].…”
Section: Introduction
Mentioning confidence: 99%