2023
DOI: 10.1016/j.neucom.2023.03.028
Neural network model based on global and local features for multi-view mammogram classification

Cited by 7 publications (2 citation statements)
References 33 publications
“…Our proposed method not only employs CNNs to extract features from bilateral mammograms but also realizes local-to-global area feature attention correlation learning between the two breasts, overcoming the limitation of the CNN’s local receptive field. Additionally, in terms of model design, current works [5, 8, 9, 10, 11, 12, 14, 33, 34, 35, 36] mainly focus on improving the diagnostic accuracy for individual breasts, with only a few methods designing models for four-view analysis [4, 18, 19, 37]. However, these models fail to fully explore the relationship between bilateral mammograms.…”
Section: Related Work
confidence: 99%
“…The prevailing approach involves using multi-view image information as the input for models. For works that use ipsilateral views as model inputs, the CC and MLO views are used for feature extraction for cancer diagnosis [8, 9], but each view extracts features independently before they are combined, so there is no information interaction between views and inter-view relationships are lost. In 2021, van Tulder et al. [10] introduced an inter-view attention mechanism for information transfer between views.…”
Section: Introduction
confidence: 99%
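The citation statement above contrasts independent per-view feature extraction with inter-view attention, where features from one view (e.g. CC) attend over features from the other (e.g. MLO) so that the views exchange information before fusion. The following is a minimal NumPy sketch of that general idea only; the function name, dimensions, and residual fusion are illustrative assumptions, not the specific architecture of van Tulder et al. or of the paper tracked on this page.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(feat_cc, feat_mlo):
    """Illustrative inter-view attention: each spatial token of the CC
    view attends over all tokens of the MLO view, so CC features are
    enriched with MLO context instead of being fused only at the end.
    Shapes: feat_cc (n_cc, d), feat_mlo (n_mlo, d)."""
    d = feat_cc.shape[-1]
    scores = feat_cc @ feat_mlo.T / np.sqrt(d)   # (n_cc, n_mlo) similarity
    attn = softmax(scores, axis=-1)              # attention weights over MLO tokens
    return feat_cc + attn @ feat_mlo             # residual fusion of the two views

# toy example: 4 spatial tokens per view, 8-dim features (hypothetical sizes)
rng = np.random.default_rng(0)
cc = rng.normal(size=(4, 8))
mlo = rng.normal(size=(4, 8))
fused = cross_view_attention(cc, mlo)
print(fused.shape)
```

Applying the same function with the arguments swapped would enrich the MLO features with CC context, giving the bidirectional information transfer the statement describes.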