2017 1st International Workshop on Arabic Script Analysis and Recognition (ASAR)
DOI: 10.1109/asar.2017.8067749

Multi-layer recurrent neural network based offline Arabic handwriting recognition

Cited by 6 publications (4 citation statements)
References 12 publications

“…In 2017, Chen et al. [17] presented a segmentation-free RNN approach with a four-layer bidirectional Gated Recurrent Unit (GRU) network and a CTC output layer, combined with the dropout technique. The authors evaluated the system's performance on the IFN/ENIT database under the "abcd-e" scenario.…”
Section: Related Work
confidence: 99%
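
The architecture just described can be made concrete with a short PyTorch sketch: a four-layer bidirectional GRU with inter-layer dropout feeding a linear layer whose log-softmax output is suited to CTC training. The feature size, hidden size, and character-set size used here are illustrative assumptions, not values reported in the paper.

import torch
import torch.nn as nn

class BiGRUCTC(nn.Module):
    """Four-layer bidirectional GRU with dropout and a CTC output head."""
    def __init__(self, n_features=48, n_hidden=128, n_classes=121):
        super().__init__()
        # Four stacked bidirectional GRU layers; dropout is applied
        # between layers (the "dropout technique" the passage mentions).
        self.rnn = nn.GRU(n_features, n_hidden, num_layers=4,
                          bidirectional=True, dropout=0.5)
        # Project both directions onto the character set plus one CTC blank.
        self.fc = nn.Linear(2 * n_hidden, n_classes + 1)

    def forward(self, x):            # x: (T, N, n_features) frame sequence
        out, _ = self.rnn(x)         # out: (T, N, 2 * n_hidden)
        return self.fc(out).log_softmax(-1)  # per-frame log-probs for CTC
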
“…Chen et al. [76] provided a segmentation-free RNN method with a four-layer bidirectional Gated Recurrent Unit (GRU) network and a Connectionist Temporal Classification (CTC) output layer, combined with the dropout technique, which was claimed to improve the system's generalization ability. The RNN-GRU was designed to recognize words in offline handwritten Arabic.…”
Section: Related Work
confidence: 99%
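
As a minimal illustration of why the CTC output layer makes the approach segmentation-free, the sketch below computes the CTC loss over unsegmented frame sequences and greedily decodes the best path. All shapes, label counts, and the stand-in network output are assumptions made for the example.

import torch
import torch.nn as nn

T, N, C = 100, 8, 122                 # frames, batch, classes incl. blank
logits = torch.randn(T, N, C, requires_grad=True)  # stand-in network output
log_probs = logits.log_softmax(-1)

# CTC aligns unsegmented frame sequences with target label sequences,
# so no character segmentation of the word image is needed.
targets = torch.randint(1, C, (N, 20))             # dummy label sequences
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           torch.full((N,), T), torch.full((N,), 20))
loss.backward()

# Greedy decoding: arg-max per frame, collapse repeats, drop blanks.
best = log_probs.argmax(-1)[:, 0]     # best path for the first sample
decoded = [int(c) for i, c in enumerate(best)
           if c != 0 and (i == 0 or c != best[i - 1])]
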
“…The CNN model can be exploited in two different ways: either by designing the model manually or automatically and training it from scratch [59, 60, 61, 62], or by employing Transfer Learning (TL) strategies to leverage features from off-the-shelf models pre-trained on larger databases [30, 63]. Many state-of-the-art pre-trained CNN models are available that have been trained on the ImageNet database [64], such as VGGNet [65], GoogLeNet [66], ResNet [67], InceptionV3 [68], Xception [69], MobileNet [70], and DenseNet [71]. The TL technique transfers knowledge acquired from one or more tasks in a source domain to another task in a target domain [28, 63, 72, 73, 74, 75] by utilizing a network pre-trained on a source domain with a considerable amount of training data [74, 76], which helps boost recognition accuracy or reduce training time [74]. The two most widely used TL strategies are feature extraction from the pre-trained network and fine-tuning of the pre-trained network [63, 77].…”
Section: Introduction
confidence: 99%
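
A minimal sketch of the two TL strategies contrasted above, using a torchvision ResNet-18 pre-trained on ImageNet; the target class count and the choice of which layers to unfreeze are illustrative assumptions.

import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Strategy 1 - feature extraction: freeze the pre-trained backbone and
# train only a new classification head on the target task.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 28)  # e.g. 28 letter classes

# Strategy 2 - fine-tuning: keep the new head but also unfreeze part of
# the backbone so its weights adapt to the target domain.
for p in model.layer4.parameters():
    p.requires_grad = True
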
“…Anne et al. [129] proposed a word recognition technique based on HMMs. Liren et al. [130] proposed a segmentation-free approach using an RNN. The network is based on deep learning concepts and uses four bidirectional gated recurrent units with a dropout layer for generalisation.…”
Section: Comparison With Other Languages
confidence: 99%