2022
DOI: 10.32604/cmc.2022.031247
Hyperparameter Tuning Bidirectional Gated Recurrent Unit Model for Oral Cancer Classification

Abstract: Oral Squamous Cell Carcinoma (OSCC) is a type of Head and Neck Squamous Cell Carcinoma (HNSCC), and it should be diagnosed at early stages to accomplish efficient treatment, increase the survival rate, and reduce the death rate. Histopathological imaging is a widespread standard used for OSCC detection. However, it is a cumbersome process and demands expert knowledge. So, there is a need for automated detection of OSCC using Artificial Intelligence (AI) and Computer Vision (CV) technologies. In this backg…

Cited by 3 publications (1 citation statement)
References 20 publications
“…Textural characteristics were extracted from the images as well by converting them to the Gray Level Co-Occurrence Matrix (GLCM) [13] and the Local Binary Pattern (LBP) [14] because the GLCM determines an image's texture by determining how frequently pairs of pixels with specific values and spatial relationships appear in the image [15] and the local spatial patterns and the contrast in the grey scale in an image are effectively captured by LBP descriptors [16]. With the most recent advancements in machine learning, numerous deep learning-based techniques, including convolutional neural network (CNN), pre-trained deep CNN networks [17], like Alexnet, VGG 16, VGG 19, ResNet 50 [18], MobileNet [19], multimodal fusion with CoaT (coat-lite-small), PiT (pooling based vision transformer pits-distilled-224), ViT (vision transformer small-patch16-384), ResNetV2 and ResNetY [20], and concatenated models of VGG 16, Inception V3 [21], have been proposed for the automated extraction of morphological features. After the feature extraction, the images were classified into normal and OSCC categories using different classifiers such as random forest [22], support vector machine (SVM) [10], extreme gradient boosting (XGBoost) with binary particle swarm optimization (BPSO) feature selection [23], K nearest neighbor (KNN) [10], duck patch optimization based deep learning method [24] and two pretrained models, ResNet 50 and DenseNet 201 [11].…”
Section: Introductionmentioning
confidence: 99%
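To make the GLCM idea in the statement above concrete, the sketch below counts how often pairs of pixel values co-occur at a fixed spatial offset (one pixel to the right) in a tiny grayscale image. This is a minimal pure-Python illustration, not the cited authors' implementation; real pipelines would typically use a library routine such as scikit-image's `graycomatrix`.

```python
# Minimal GLCM sketch: count co-occurrences of pixel-value pairs
# at a fixed offset (here: one pixel to the right).
# Illustrative only; production code would use skimage.feature.graycomatrix.

def glcm(image, levels, offset=(0, 1)):
    """Return a levels x levels co-occurrence count matrix for the given offset."""
    dr, dc = offset
    rows, cols = len(image), len(image[0])
    matrix = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            # Count the pair (value at (r,c), value at (r2,c2)) if the
            # offset neighbor lies inside the image.
            if 0 <= r2 < rows and 0 <= c2 < cols:
                matrix[image[r][c]][image[r2][c2]] += 1
    return matrix

# Tiny 4x4 example image quantized to 4 gray levels (values 0..3).
img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
m = glcm(img, levels=4)
# m[i][j] is how often value j appears immediately to the right of value i;
# texture descriptors (contrast, homogeneity, energy, ...) are then
# computed from the normalized version of this matrix.
```

Normalizing `m` by the total number of counted pairs turns it into the joint probability matrix from which the usual Haralick texture features are derived.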