2022
DOI: 10.1002/mp.15852
A VGG attention vision transformer network for benign and malignant classification of breast ultrasound images

Abstract: Purpose: Breast cancer is the most commonly occurring cancer worldwide. The ultrasound reflectivity imaging technique can be used to obtain breast ultrasound (BUS) images, which in turn can be used to classify tumors as benign or malignant. However, this classification is subjective and depends on the experience and skill of operators and doctors. Automatic classification methods can assist doctors and improve objectivity, but current convolutional neural networks (CNNs) are not good at learning global features and vi…

Citations: Cited by 25 publications (13 citation statements)
References: 38 publications
“…38 This dataset had 980 samples obtained using an iU22 xMATRIX scanner (Philips, USA) or a LOGIQ E9 scanner (GE, USA), which included 595 malignant samples and 385 benign samples. 39 The other dataset (dataset B)…”
Section: Dataset (mentioning)
Confidence: 99%
“…Nevertheless, a recent tendency is to classify lesions in BI-RADS assessment categories, which relates the outcome to a malignancy risk and is aligned with the BI-RADS-based reports. In this context, the BUS-BRA dataset can be used to develop and evaluate CAD systems for classifying pathology classes 66,67 and BI-RADS categories. 12,13 It is worth mentioning that the main limitation of BI-RADS annotations is that they are human-dependent and biased toward the specialist's expertise, becoming critical when having a single specialist, as in our project herein.…”
Section: Potential Applications (mentioning)
Confidence: 99%
“…Recently, a set of researchers used such a technique to recognize benign from malignant cases, i.e., Gheflati et al examined the performance of pure and hybrid pre-trained vision transformer models based on two breast ultrasound datasets [ 28 ], demonstrating the importance of involving the Vision Transformer technique for automatically detecting breast masses in ultrasonography. Another work used a CNN module to extract local features while a ViT module was employed to identify the global features among several areas and improve the relevant local features [ 29 ]. The hybrid model achieved a high precision of 90.77%, recall of 90.73%, specificity of 85.58%, and F1 score of 90.73%.…”
Section: Related Work (mentioning)
Confidence: 99%
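The statement above describes the hybrid design only at a high level: a CNN module extracts local features, a ViT module captures global relations among regions, and a classification head separates benign from malignant. As a rough illustration of how such a CNN + transformer hybrid can be wired together, here is a minimal PyTorch sketch; the class name, layer sizes, the VGG-16 backbone, and the two-layer transformer encoder are assumptions chosen for illustration, not the architecture published in the cited work.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16


class HybridCNNViT(nn.Module):
    """Illustrative hybrid classifier: a VGG-style backbone extracts local
    feature maps, and a small transformer encoder models global relations
    among spatial positions before a linear head predicts benign vs. malignant."""

    def __init__(self, embed_dim=512, num_heads=8, num_layers=2, num_classes=2):
        super().__init__()
        # VGG-16 convolutional stage: local feature extractor -> (B, 512, H/32, W/32)
        self.cnn = vgg16(weights=None).features
        # Learnable [CLS] token plus a transformer encoder for global context
        # (positional embeddings are omitted in this sketch for brevity)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        feat = self.cnn(x)                        # (B, 512, h, w) local features
        tokens = feat.flatten(2).transpose(1, 2)  # (B, h*w, 512) token sequence
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1)
        tokens = self.encoder(tokens)             # global self-attention over positions
        return self.head(tokens[:, 0])            # classify from the [CLS] token


# Example: a 224x224 BUS image replicated to 3 channels
model = HybridCNNViT()
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```

Flattening the CNN feature map into a token sequence lets the self-attention layers relate distant regions of the image, which is the global-context role the quoted statement attributes to the ViT module.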