2022
DOI: 10.1155/2022/4983174

Transfer Learning with Feature Extraction Modules for Improved Classifier Performance on Medical Image Data

Abstract: Transfer learning attempts to use the knowledge learned from one task and apply it to improve the learning of a separate but similar task. This article proposes to evaluate this technique’s effectiveness in classifying images from the medical domain. The article presents a model TrFEMNet (Transfer Learning with Feature Extraction Modules Network), for classifying medical images. Feature representations from General Feature Extraction Module (GFEM) and Specific Feature Extraction Module (SFEM) are input to a pr…
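The abstract names two feature-extraction modules whose representations are fused before classification. Below is a minimal sketch of that idea, assuming a frozen ImageNet ResNet-18 stands in for the GFEM, a small trainable CNN for the SFEM, and plain concatenation before a classifier head; these are illustrative assumptions, not the paper's actual module designs or fusion strategy.

```python
# Sketch of a GFEM + SFEM transfer-learning classifier (assumed architecture).
import torch
import torch.nn as nn
import torchvision.models as models  # assumes torchvision >= 0.13 for the weights enum

class TrFEMNetSketch(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # GFEM stand-in: frozen ImageNet-pretrained backbone providing general features.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.gfem = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 512, 1, 1)
        for p in self.gfem.parameters():
            p.requires_grad = False
        # SFEM stand-in: small trainable CNN that adapts to the target medical dataset.
        self.sfem = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Classification head over the concatenated (general + specific) representation.
        self.head = nn.Sequential(
            nn.Linear(512 + 64, 128), nn.ReLU(), nn.Linear(128, num_classes)
        )

    def forward(self, x):
        g = self.gfem(x).flatten(1)   # general features
        s = self.sfem(x).flatten(1)   # dataset-specific features
        return self.head(torch.cat([g, s], dim=1))

logits = TrFEMNetSketch(num_classes=4)(torch.randn(2, 3, 224, 224))
```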

Cited by 4 publications (5 citation statements)
References: 34 publications
“…The goal is to combine the transfer learning technique with the FIN to generate imitated feature maps from the significant features revealed in this study to improve the predictive power of the adaptive models. As future work, we believe that if the K-SOM is combined with the Transfer Learning with Feature Extraction Modules Network (TrFEMNet) [156], with feature representations from the General Feature Extraction Module (GFEM) [156] and the Specific Feature Extraction Module (SFEM) [156], it would generate more stable and in-depth results for further modeling. The major role of transfer learning will come into play when the significant radiomics features/biomarkers discovered by this study (in the form of a K-SOM topology or feature space) are used as a learned representation for physiological knowledge transfer into the clinical setting that uses the same imaging concept with a slightly different DCE-MRI pulse sequence (SPGRE).…”
Section: Discussion (mentioning)
confidence: 99%
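The citing authors sketch a pairing of K-SOM feature spaces with TrFEMNet-style deep features. A minimal, speculative sketch of that pairing is shown below, assuming MiniSom as the self-organizing map implementation and random vectors standing in for the deep feature representations; both are assumptions for illustration, not the study's pipeline.

```python
# Deep feature representations (stand-in for GFEM/SFEM outputs) organized by a
# Kohonen self-organizing map. MiniSom and the feature dimensionality are assumed.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 128))   # e.g. 200 images x 128-dim deep features (synthetic)

som = MiniSom(x=8, y=8, input_len=128, sigma=1.5, learning_rate=0.5, random_seed=0)
som.random_weights_init(features)
som.train_random(features, num_iteration=2000)

# Each image is summarized by its best-matching unit; the resulting map topology
# could serve as the learned representation to transfer across pulse sequences.
bmus = np.array([som.winner(f) for f in features])
print(bmus[:5])
```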
“…This research aimed to combine respiratory sound, CNN classification, and time-series feature extraction from pre-trained images. We also assessed the performance of several feature extractors [34]. The LSTM model is utilized to obtain the findings after the MFCC model has been used for feature extraction [35].…”
Section: Related Work History (mentioning)
confidence: 99%
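A minimal sketch of the MFCC-then-LSTM pipeline mentioned in the quote, assuming librosa for MFCC extraction, synthetic audio as a stand-in clip, and a small PyTorch LSTM classifier; all of these are illustrative assumptions, not the cited paper's configuration.

```python
# MFCC features extracted from an audio clip and fed to an LSTM classifier.
import librosa
import numpy as np
import torch
import torch.nn as nn

sr = 22050
y = np.random.default_rng(0).normal(size=sr * 2).astype(np.float32)  # 2 s of stand-in audio
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)                    # shape (13, n_frames)
seq = torch.tensor(mfcc.T, dtype=torch.float32).unsqueeze(0)          # (1, n_frames, 13)

class LstmClassifier(nn.Module):
    def __init__(self, n_features=13, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, (h, _) = self.lstm(x)   # last hidden state summarizes the MFCC sequence
        return self.fc(h[-1])

logits = LstmClassifier()(seq)
```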
“…Chunfeng Lian et al. proposed an attention-guided deep learning framework for dementia diagnosis using structural MRI. This framework combines a fully convolutional network (FCN) and a multi-branch hybrid network (HybNet) to jointly learn and fuse sMRI features for CAD model development [6]. The TrFEMNet model was proposed for medical image classification, showing comparable performance to other models, especially on complex datasets [7]. Mohammad Monirujjaman Khan et al. explored the application of convolutional neural networks (CNNs) for diagnosing brain cancers using medical images.…”
Section: Literature Review (mentioning)
confidence: 99%
“…For multiclass classification, the score of each class is calculated and then aggregated as defined in Eq. (7) to assess the overall performance of the model.…”
Section: F1-score (mentioning)
confidence: 99%
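A small illustration of per-class F1 scores being aggregated into one multiclass score; the macro average is shown here as a common choice, though the exact weighting in the cited paper's Eq. (7) is not reproduced.

```python
# Per-class F1 computed and aggregated into a single multiclass score.
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0, 2]

per_class = f1_score(y_true, y_pred, average=None)   # one F1 score per class
macro = f1_score(y_true, y_pred, average="macro")    # unweighted mean of per-class F1
print(per_class, macro)
```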