2020
DOI: 10.1007/s10489-020-01901-2

Bangla-Meitei Mayek scripts handwritten character recognition using Convolutional Neural Network

Cited by 25 publications (9 citation statements)
References 43 publications
“…In this study, we chose to create a custom neural network by designing its structure and training the whole network using our own data. The reason is that some reports have shown this "build your own network from scratch" approach can achieve better performance than transfer learning [26,27].…”
Section: Improvement II: Stochastic Pooling Neural Network
Citation type: mentioning (confidence: 99%)
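As a rough illustration of the "from scratch" approach this statement cites, and of the stochastic pooling named in its section title, below is a minimal PyTorch sketch: during training each pooling window samples one activation with probability proportional to its magnitude, and at inference it uses the probability-weighted average. The layer, model, and `num_classes` value are illustrative assumptions, not the citing paper's actual implementation.

```python
import torch
import torch.nn as nn

class StochasticPool2d(nn.Module):
    """Stochastic pooling: sample one activation per window with
    probability proportional to its non-negative magnitude while
    training; use the probability-weighted mean at inference."""
    def __init__(self, kernel_size=2, stride=2):
        super().__init__()
        self.k, self.s = kernel_size, stride

    def forward(self, x):
        # Split the feature map into pooling windows: (N, C, H', W', k*k).
        p = x.unfold(2, self.k, self.s).unfold(3, self.k, self.s)
        n, c, h, w = p.shape[:4]
        p = p.contiguous().view(n, c, h, w, -1)
        pos = p.clamp(min=0)
        sums = pos.sum(dim=-1, keepdim=True)
        # Fall back to a uniform distribution for all-zero windows.
        probs = torch.where(sums > 0, pos / sums.clamp(min=1e-12),
                            torch.full_like(pos, 1.0 / pos.shape[-1]))
        if self.training:
            idx = torch.multinomial(probs.view(-1, probs.shape[-1]), 1)
            return p.view(-1, p.shape[-1]).gather(1, idx).view(n, c, h, w)
        return (p * probs).sum(dim=-1)

# A small character-recognition CNN built from scratch around the layer;
# num_classes is a placeholder for the script's alphabet size.
def make_model(num_classes=80):
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), StochasticPool2d(),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), StochasticPool2d(),
        nn.Flatten(), nn.LazyLinear(num_classes),
    )
```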
“…The model won the ICFHR2018 Competition on Automated Text Recognition on a READ dataset. The authors in [20] proposed a new deep CNN architecture with high recognition performance that is capable of learning deep features for visualization. In the evaluation on the ICDAR-2013 offline HCCR competition dataset, the model achieves a relative 0.83% error reduction while having 49% fewer parameters and the same computational cost as the current state-of-the-art single-network method trained only on handwritten data.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…Next, as a proxy for wind speed and cloud movement, we examined the degree to which radar echo maps influenced flood height at the target location at intervals of 30 min (30 min, 60 min, 90 min, 120 min, and 150 min). We employed the well-known early-stopping procedure, which has proven highly effective at preventing overfitting during model training in a range of applications [40][41][42]. The judgment benchmark (early-stopping patience) was 20 epochs, and the Adam optimizer was used with an initial learning rate of 0.001.…”
Section: Data Sets and Experiment Parameters
Citation type: mentioning (confidence: 99%)
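For reference, the training setup this statement describes (Adam at an initial learning rate of 0.001 plus early stopping that halts after 20 non-improving epochs) maps directly onto standard framework callbacks. Below is a minimal Keras sketch; the model, datasets, loss, and `max_epochs` are placeholder assumptions, not the citing study's code.

```python
import tensorflow as tf

def train(model, train_ds, val_ds, max_epochs=500):
    """Train with the quoted settings: Adam at lr=1e-3 and early
    stopping after 20 epochs without validation improvement."""
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        loss="mse",  # placeholder loss for a flood-height regression target
    )
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss",
        patience=20,                # the 20-epoch benchmark from the quote
        restore_best_weights=True,  # keep the best validation checkpoint
    )
    return model.fit(train_ds, validation_data=val_ds,
                     epochs=max_epochs, callbacks=[early_stop])
```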