2021
DOI: 10.1002/mp.15341

FMRNet: A fused network of multiple tumoral regions for breast tumor classification with ultrasound images

Abstract: Purpose: Recent studies have illustrated that the peritumoral regions of medical images have value for clinical diagnosis. However, existing approaches using peritumoral regions mainly focus on the diagnostic capability of a single region and ignore the advantages of effectively fusing the intratumoral and peritumoral regions. In addition, these methods need accurate segmentation masks in the testing stage, which are tedious and inconvenient in clinical applications. To address these issues, we construct …

Cited by 21 publications (16 citation statements)
References 43 publications
“…So, the fused features obtained from both sub-band images may be the reason for good classification performance. The proposed framework has not used any segmentation techniques and does not even require ground truth of segmented tumor which was used in other state-of-art methods [6]- [9]. This makes the proposed work more automated and easily implementable.…”
Section: Performance Comparison With State-of-the-Art Methods
confidence: 99%
“…The fused features have been extracted from two types of modules like the enhanced combined tumoral module and the Intra tumoral, peritumoral, and combined tumoral (IPC) module, which are then used for classification purposes. In [9], authors used UNet to segment lesion areas from sonographic images and the independent component analysis (ICA) method was used on segmented images for extracting features. The obtained features are then fused with deep automated features.…”
Section: Introduction
confidence: 99%
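The pipeline this excerpt attributes to [9] — UNet segmentation of the lesion, ICA features from the segmented region, then fusion with deep automated features — can be sketched minimally. The function name, feature dimensions, and the use of plain concatenation for the fusion step are illustrative assumptions; the excerpt does not specify the cited paper's actual fusion operator.

```python
import numpy as np

def fuse_features(handcrafted, deep):
    # Late fusion by concatenation along the feature axis (one common
    # choice; the exact fusion operator is not given in the excerpt).
    return np.concatenate([handcrafted, deep], axis=-1)

# Hypothetical shapes: 32 ICA components from the segmented lesion,
# 128 activations from a CNN's penultimate layer, batch size 1.
ica_feats = np.random.rand(1, 32)
cnn_feats = np.random.rand(1, 128)
fused = fuse_features(ica_feats, cnn_feats)
print(fused.shape)  # (1, 160)
```

The fused vector would then feed a conventional classifier head; concatenation keeps both feature families intact and lets the classifier learn their relative weighting.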
“…Qian et al [4] aggregated the multimodal ultrasound images for an explainable prediction to support the clinical decision-making of the sonographers and increase the confidence levels of the decision. Cui et al [26] proposed an FMRNet to fuse combined tumoral, intratumoral and peritumoral regions to represent the whole tumor heterogeneous. Di et al [27] introduced a saliency-guided approach to differentiate the foreground and background regions by two separated branches.…”
Section: A Breast Cancer Diagnosis In Ultrasound Images
confidence: 99%
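The region decomposition FMRNet is described as fusing — intratumoral, peritumoral, and combined tumoral — can be illustrated on a toy binary mask. The 4-neighbourhood dilation below is an assumed stand-in for however the paper actually defines the peritumoral ring; it is only meant to show how the three regions relate.

```python
import numpy as np

def dilate(mask, iterations=1):
    # Binary dilation with a 4-neighbourhood, implemented via np.roll.
    # Note: np.roll wraps at the borders, fine for this toy example.
    out = mask.copy()
    for _ in range(iterations):
        out = (out
               | np.roll(out, 1, axis=0) | np.roll(out, -1, axis=0)
               | np.roll(out, 1, axis=1) | np.roll(out, -1, axis=1))
    return out

mask = np.zeros((7, 7), dtype=bool)
mask[3, 3] = True                  # toy intratumoral mask (one pixel)

intra = mask                       # intratumoral region
peri = dilate(mask, 2) & ~mask     # peritumoral ring around the tumor
combined = intra | peri            # combined tumoral region
```

In a real setting `mask` would come from a segmentation network, and each region would crop the ultrasound image before being fed to its own branch.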
“…28 In 2021, Cui et al constructed a deep CNN that can fully exploit the features of multiple regions and their potential relationships with categories. 29 These previous CNNs had achieved success in BUS classification. They could extract local features well but were not good at extracting global features; also, the reception field of CNN was still limited.…”
Section: Introduction
confidence: 99%