2021
DOI: 10.1186/s12911-020-01340-6
Richer fusion network for breast cancer classification based on multimodal data

Abstract: Background. Deep learning algorithms significantly improve the accuracy of pathological image classification, but the accuracy of breast cancer classification using only single-mode pathological images still cannot meet the needs of clinical practice. Inspired by the real scenario of pathologists reading pathological images for diagnosis, we integrate pathological images and structured data extracted from clinical electronic medical records (EMR) to further improve the accuracy of breast cancer c…

Cited by 29 publications (34 citation statements); references 30 publications.
“…To accomplish this, various methods were used in the studies, including neural-network-based feature extraction, data generation through software, or manual extraction of features. Out of the 22 early fusion studies, 19 studies 12, 13, 15, 25, 33–36, 39, 41–45, 50–53 used manual or software-based imaging features, and 3 studies used neural-network-based architectures to extract imaging features before combining them with another clinical data modality 16, 18, 54. Six of the 19 studies that used manual or software-based features reduced the feature dimension before concatenating the two modalities’ features, using different methods 25, 36, 45, 50–52.…”
Section: Results
Confidence: 99%
“…Fourteen early fusion studies evaluated their fusion models’ performance against that of single-modality models 12, 13, 15, 16, 18, 25, 32–34, 36, 41–44, 51. As a result, 13 of these studies exhibited better performance for fusion when compared with their imaging-only and clinical-only counterparts 12, 13, 15, 16, 18, 25, 32–34, 41–44, 51.…”
Section: Results
Confidence: 99%