2021
DOI: 10.1109/access.2021.3087647
A Hybrid Deep Model for Recognizing Arabic Handwritten Characters

Abstract: Handwriting recognition for computer systems has been researched for a long time, with researchers having an extensive variety of methods at their disposal. The problem is that most of these experiments are done in English, as it is the most widely spoken language in the world. But other languages such as Arabic, Mandarin, Spanish, French, and Russian also need research, since millions of people speak them. In this work, recognizing and developing Arabic handwritten characters is p…

Cited by 31 publications (19 citation statements) · References 24 publications
“…Each layer is followed by a rectified linear unit (ReLU) [38]. This is followed by an average-pooling layer and a dense layer [39]. There are also two normalization layers, two fully connected layers, and a softmax layer.…”
Section: AlexNet
confidence: 99%
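The statement above describes an AlexNet-style stack: convolutional layers followed by ReLU activations, average pooling, dense (fully connected) layers, and a final softmax. A minimal NumPy sketch of the non-convolutional building blocks it names — ReLU, non-overlapping average pooling, and softmax — is given below; the function names and the 2×2 pooling window are illustrative, not taken from the paper.

```python
import numpy as np

def relu(x):
    # Rectified linear unit: passes positive values, zeroes out negatives.
    return np.maximum(0.0, x)

def avg_pool(x, k=2):
    # Non-overlapping k x k average pooling on a 2-D (H, W) feature map.
    # Trailing rows/columns that do not fill a full window are dropped.
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def softmax(z):
    # Numerically stable softmax: subtract the max before exponentiating,
    # then normalize so the class scores sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()
```

In a full model these would sit between learned convolutional and dense layers; here they only illustrate the operations the citation statement enumerates.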
“…Using 29 classes that segregated forms from the Hijja dataset, Naseem Alrobah and Saleh Albahli [12] created a hybridized architecture in 2020 with the goal of classifying Arabic handwritten characters. Feature extraction utilizing two architectures is the initial stage.…”
Section: Related Work
confidence: 99%
“…In residual networks, outputs from earlier layers are added to a later layer, bypassing the layers in between. The bypassed layers therefore receive gradients both from their immediate predecessors and through the shortcut from deeper layers [40]. This mitigated the vanishing gradient problem and made training very deep neural networks feasible.…”
Section: Models
confidence: 99%
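The skip connection described above can be sketched in a few lines: the block's input is added back to the output of the intermediate layers, so gradients have a direct path around them. This is a minimal NumPy illustration with dense layers standing in for the convolutional layers of an actual residual network; the function name and weight arguments are illustrative, not from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # Two linear transforms with a ReLU in between, plus an identity
    # skip connection: the input x bypasses both transforms and is
    # added back before the final activation. During backpropagation
    # the gradient flows through this shortcut unattenuated.
    h = relu(x @ w1)
    return relu(x + h @ w2)
```

With `w2` set to zero the block reduces to the identity (for non-negative inputs), which is exactly the easy-to-learn default that makes residual networks trainable at depth.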