2022
DOI: 10.1007/s00530-022-00911-z
Optimized generative adversarial network based breast cancer diagnosis with wavelet and texture features

Cited by 4 publications (4 citation statements)
References 48 publications
“…Apart from the TL models discussed, a few recent studies have used dual-view CNNs (AlGhamdi and Abdel-Mottaleb, 2021; Wu et al, 2019) and multi-task CNNs (Gao et al, 2020; Wimmer et al, 2021). In addition, new techniques have emerged in the latest studies, such as attention-based CNNs, which use an attention block to help the network focus on the discriminative characteristics of images (Al-Hejri et al, 2022; Songsaeng et al, 2021; Wang et al, 2022; Sun et al, 2023; Hina et al, 2020; Zhang et al, 2020); Deep Adversarial Domain Adaptation (Wang et al, 2021a), which uses domain adaptation for classification; Generative Adversarial Networks, in which the generator produces realistic images and the discriminator distinguishes real from synthetic data while also performing classification (Park et al, 2023; Ponraj and Canessane, 2023b; Shivhare and Saxena, 2022); and the student–teacher InterNRL (Wang et al, 2023), in which the student serves as a prototype-based classifier and the teacher as an image classifier. Auto-encoders (Hamza, 2023), trained on labelled data, contributed to unsupervised feature learning and supervised classification; Vision Transformers (ViT) (Chen et al, 2022; Xia et al, 2023; Prodan et al, 2023) apply a transformer-style architecture across image patches; and semi-supervised methods (Calderon-Ramirez et al, 2022) were also used to overcome the missing-annotation problem.…”
Section: Deep Learning In Breast Cancer Diagnosis
confidence: 99%
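The ViT-style patch processing mentioned in the quoted passage can be illustrated with a minimal sketch: splitting an image into flattened non-overlapping patches, the token sequence such a model attends over. Plain NumPy; the 224×224 image and 16-pixel patch size are common illustrative choices, not taken from the cited works.

```python
import numpy as np

def image_to_patches(img, patch):
    """Split an (H, W, C) image into flattened non-overlapping patches,
    the token sequence a ViT-style model attends over."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    rows, cols = h // patch, w // patch
    # Carve H and W into blocks, bring the two block indices together,
    # then flatten each patch into one vector.
    patches = (img.reshape(rows, patch, cols, patch, c)
                  .transpose(0, 2, 1, 3, 4)
                  .reshape(rows * cols, patch * patch * c))
    return patches

tokens = image_to_patches(np.zeros((224, 224, 3)), patch=16)
print(tokens.shape)  # (196, 768): 14*14 patches, each 16*16*3 values
```

In a full ViT each 768-dimensional row would then be linearly projected and combined with a positional embedding before entering the transformer encoder.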
“…Inception-v3 improved the model by using factorised convolutions and dimension reduction. (Babu and Jerome, 2022; Mahmood et al, 2022; Shen et al, 2020; Wang et al, 2019; Zhang et al, 2021a), (Albalawi et al, 2022; Cai et al, 2021; Chandraraju and Jeyaprakash, 2022; Gargouri et al, 2022; George Melekoodappattu et al, 2022; Jones et al, 2023; Kurek et al, 2018; Mahmoud et al, 2021; Mokni and Haoues, 2022; Patil and Biradar, 2021; Patil et al, 2022; Razali et al, 2023a; Sajid et al, 2023; Shivhare and Saxena, 2022; Arputham et al, 2021; Chouhan et al, 2021; Chugh et al, 2022; Ramesh et al, 2022; Sivakrithika and Dinakaran, 2018; Ranjbarzadeh et al, 2023a; Song et al, 2020; Yu et al, 2023c; Elmoufidi, 2022; Hosni Mahmoud et al, 2022), (Babu and Jerome, 2022), (Razali et al, 2023a; Arputham et al, 2021; Chouhan et al, 2021), (Mahmood et al, 2022; Shen et al, 2020; Wang et al, 2019; Zhang et al, 2021a), (Albalawi et al, 2022; Razali e...…”
Section: RQ2: What Are The Unique Challenges And Considerations In Ap...
confidence: 99%
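The factorised-convolution idea behind Inception-v3 mentioned above trades one large kernel for stacked small ones with the same receptive field. A quick back-of-the-envelope parameter count shows the saving; the channel width here is chosen purely for illustration.

```python
# Parameter count of a k x k convolution layer with c_in input and
# c_out output channels (biases ignored for simplicity).
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

c = 192  # hypothetical channel width, for illustration only
full = conv_params(5, c, c)          # one 5x5 convolution
factored = 2 * conv_params(3, c, c)  # two stacked 3x3 convolutions,
                                     # covering the same 5x5 receptive field
print(full, factored, round(1 - factored / full, 2))
# prints 921600 663552 0.28 -> the factorised pair uses ~28% fewer weights
```

The same trick generalises: Inception-v3 also factorises an n×n kernel into a 1×n followed by an n×1 convolution, cutting the per-layer cost further.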
“…Generative Adversarial Networks (GANs [24]) are primarily known for their ability to generate realistic data, but they can also play a role in improving breast cancer detection through various innovative approaches [60,61]. GANs can be used to augment medical imaging datasets, including those used for breast cancer detection.…”
Section: GAN For Breast Cancer Detection
confidence: 99%
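The generator/discriminator interplay described in the quote can be sketched at toy scale: a one-dimensional generator learns to match a "real" Gaussian distribution against a logistic discriminator, the same adversarial dynamic that, at full scale, yields synthetic mammography images for augmentation. Plain NumPy; the distribution, learning rate, and step count are illustrative assumptions, unrelated to the cited GAN implementations.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: a toy 1-D Gaussian standing in for an image-feature distribution.
REAL_MU, REAL_SIGMA = 4.0, 1.25

# Generator G(z) = wg*z + bg maps noise to samples;
# discriminator D(x) = sigmoid(wd*x + bd) scores real vs. synthetic.
wg, bg = 1.0, 0.0
wd, bd = 0.1, 0.0
lr = 0.01

for step in range(5000):
    z = rng.normal()                          # noise input
    x_real = rng.normal(REAL_MU, REAL_SIGMA)
    x_fake = wg * z + bg

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(wd * x_real + bd), sigmoid(wd * x_fake + bd)
    wd += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    bd += lr * ((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake) (non-saturating loss).
    d_fake = sigmoid(wd * x_fake + bd)
    grad_x = (1 - d_fake) * wd                # d log D / d x_fake
    wg += lr * grad_x * z
    bg += lr * grad_x

# Synthetic samples for "augmentation": their mean drifts toward REAL_MU.
fakes = wg * rng.normal(size=10000) + bg
print(round(float(fakes.mean()), 2))
```

At scale the linear maps become deep convolutional networks, but the training loop keeps this shape: the discriminator sharpens its real/synthetic boundary, and the generator follows the discriminator's gradient toward the real data distribution.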