2021
DOI: 10.1109/access.2021.3049600

Dermoscopy Image Classification Based on StyleGAN and DenseNet201

Cited by 68 publications (42 citation statements)
References: 46 publications

“…For classification purposes [22], two dense layers are appended. DenseNet-201 is used for feature extraction, and a sigmoid activation function computes the dual (binary) classification, whereas the conventional DenseNet-201 architecture uses a softmax activation function.…”
Section: The Proposed Model
confidence: 99%
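The excerpt above describes a DenseNet-201 backbone with a small dense classification head. Below is a minimal Keras sketch of that setup, assuming an ImageNet-pretrained backbone, a 224×224 input, a 256-unit hidden dense layer, and a binary (sigmoid) output; these hyperparameters are illustrative assumptions, not the cited paper's exact configuration.

```python
# Minimal sketch (assumed configuration, not the cited paper's exact code):
# DenseNet-201 used as a frozen feature extractor with a small dense head.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

def build_classifier(num_classes: int = 2, binary_sigmoid: bool = True):
    # DenseNet-201 backbone pretrained on ImageNet, global-average-pooled features.
    backbone = DenseNet201(include_top=False, weights="imagenet",
                           input_shape=(224, 224, 3), pooling="avg")
    backbone.trainable = False  # freeze the convolutional base

    x = layers.Dense(256, activation="relu")(backbone.output)  # hidden dense layer (size assumed)
    if binary_sigmoid:
        # Sigmoid output for the dual (binary) classification described in the excerpt.
        outputs = layers.Dense(1, activation="sigmoid")(x)
    else:
        # Softmax head, as in the conventional DenseNet-201 classifier.
        outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs=backbone.input, outputs=outputs)

model = build_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```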
“…Due to its high accuracy in many fields, deep learning has become the most advanced machine learning technology. Deep learning and convolutional neural networks (CNNs) have been successfully applied to breast cancer detection [7], skin cancer recognition [73], and COVID-19 recognition and analysis [26]. Many of these studies are based on CNNs, which have become the standard for 3D medical image classification and segmentation.…”
Section: Introduction
confidence: 99%
“…Generative neural networks (GNNs) are a class of deep neural network models that represent the state-of-the-art technique for democratizing the mass synthesis of manipulated digital content [21,22]. They have been used to fabricate images by training them to encode human features [23], to manipulate images by replacing specific components of a digital image or video [24], and to create videos by animating a still image with the characteristics of a source video [25].…”
Section: Introduction
confidence: 99%