2020
DOI: 10.30534/ijeter/2020/55892020

Recognition of Baybayin (Ancient Philippine Character) Handwritten Letters Using VGG16 Deep Convolutional Neural Network Model

Abstract: We propose a system that converts 45 handwritten Baybayin (ancient Philippine) characters into their corresponding Tagalog word equivalents through a convolutional neural network (CNN) built with Keras. The implemented architecture uses a smaller, more compact variant of the VGG16 network. The classification used 1,500 images for each of the 45 Baybayin characters. The pixel values of the characters resized to 50x50 pixels in the segmentation stage were used to train the system, thus achieving a 99.54% accuracy…
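For illustration, the sketch below shows one way such a compact VGG16-style classifier for 45 Baybayin character classes on 50x50 grayscale inputs might be written in Keras. The filter counts, dense-layer width, dropout rate, and optimizer are assumptions for the sake of a runnable example, not details taken from the paper.

# Minimal sketch (assumed layer sizes): a compact VGG16-style CNN in Keras
# for 45 Baybayin character classes on 50x50 grayscale inputs.
from tensorflow.keras import layers, models

def build_compact_vgg16(input_shape=(50, 50, 1), num_classes=45):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Block 1: two 3x3 convolutions followed by max pooling (VGG pattern)
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        # Block 2
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        # Block 3
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        # Classifier head
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_compact_vgg16()
model.summary()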


Cited by 5 publications (2 citation statements). References 11 publications (11 reference statements).
“…[6] Another study used 45 classes of Baybayin characters with a dataset of 108,000 images, collected from 90 different people who were asked to draw the Baybayin script. Training and testing were performed over 50 iterations with a batch size of 32 [4]. The study used a DCNN (deep convolutional neural network) model with VGG16 as its architecture, together with an image-processing module based on the OpenCV library, for character recognition.…”
Section: Review of Related Literature
confidence: 99%
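The OpenCV-based image-processing step mentioned in this citation statement could look roughly like the following sketch. The file name, Otsu thresholding, and normalization are assumptions for illustration only; the cited work states that OpenCV was used but does not publish its exact pipeline.

# Minimal sketch (assumed preprocessing steps): load a handwritten character
# image, binarize it, and resize to the 50x50 input used for training.
import cv2
import numpy as np

def preprocess_character(image_path, size=(50, 50)):
    # Load as grayscale
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu thresholding to separate the stroke from the background (assumed step)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Resize to the network's expected input size
    resized = cv2.resize(binary, size, interpolation=cv2.INTER_AREA)
    # Scale pixel values to [0, 1] and add a channel axis for the CNN
    return (resized.astype(np.float32) / 255.0)[..., np.newaxis]

sample = preprocess_character("baybayin_a.png")  # hypothetical file name
print(sample.shape)  # (50, 50, 1)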
“…To split their data into training and testing sets, the researchers used 80% for training and 20% for testing, obtaining a training and testing accuracy of 99.54% and an overall rating of 98.84% [4]. Another set of researchers used an SVM (support vector machine) model trained on 1,000 images of Baybayin words. Using words to train their model gave them 97.9% recognition accuracy.…”
Section: Review of Related Literature
confidence: 99%
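A minimal sketch of the reported 80/20 split and training configuration (50 epochs, batch size 32) follows. The placeholder arrays, random seed, and the build_compact_vgg16 helper from the earlier sketch are assumptions, not the authors' code.

# Minimal sketch (placeholder data): 80/20 train/test split and the training
# settings reported in the citation statements above.
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

# X: (num_samples, 50, 50, 1) preprocessed images; y: integer labels 0..44
X = np.random.rand(1000, 50, 50, 1).astype(np.float32)  # placeholder images
y = np.random.randint(0, 45, size=1000)                  # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, to_categorical(y, num_classes=45), test_size=0.2, random_state=42)

model = build_compact_vgg16()  # defined in the earlier sketch
model.fit(X_train, y_train, epochs=50, batch_size=32,
          validation_data=(X_test, y_test))
test_loss, test_acc = model.evaluate(X_test, y_test)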