This paper proposes an approach to the problem of social communication between blind and deaf-mute people by converting the spoken sounds of the 28 Arabic letters (أ, …, ي) into gestures (images). Features are extracted from the speech signal using Mel-frequency cepstral coefficients (MFCC), and the letters are classified with three algorithms: J48, k-nearest neighbours (KNN), and Naive Bayes (NB). The dataset was collected by recording twenty different speakers; each speaker recorded ten utterances of each of the 28 letters, giving 5600 recordings in total (200 per letter). MFCC features are extracted from all 5600 recordings, converting each voice into a signal and producing a feature vector that is later classified with the J48, KNN, and NB algorithms, which are robust to variations in the timing and speed of the signals. The experimental results show that J48 achieves the best speech-recognition accuracy, with a performance of 100%, while KNN reaches 94.023% and Naive Bayes 20.012%.
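The pipeline the abstract describes (frame the signal, extract MFCC features, then classify the feature vectors) can be sketched as follows. This is a minimal numpy-only illustration, not the paper's implementation: the frame size, hop length, filter count, and the synthetic two-tone "letters" in the demo are all illustrative assumptions, and the KNN here is a plain Euclidean nearest-neighbour vote.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        left, centre, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, centre):
            fb[i, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fb[i, k] = (right - k) / max(right - centre, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_ceps=13):
    # Split into overlapping windowed frames, take the power spectrum,
    # apply the mel filterbank, log, then DCT-II to get cepstral coefficients.
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    fb = mel_filterbank(n_filters, n_fft, sr)
    k = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * k + 1) / (2 * n_filters))
    ceps = []
    for t in range(n_frames):
        frame = signal[t * hop:t * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft
        log_energies = np.log(fb @ power + 1e-10)
        ceps.append(dct @ log_energies)
    # Average over frames to get one fixed-length vector per recording.
    return np.mean(ceps, axis=0)

def knn_predict(train_X, train_y, x, k=3):
    # Euclidean k-nearest-neighbour majority vote.
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical demo: two synthetic "letters" as noisy tones (not the
# paper's recorded Arabic-letter dataset).
rng = np.random.default_rng(0)
sr = 16000
def tone(f):
    t = np.arange(sr) / sr
    return np.sin(2 * np.pi * f * t) + 0.01 * rng.standard_normal(sr)

train_X = np.array([mfcc(tone(f)) for f in (300, 300, 300, 2000, 2000, 2000)])
train_y = np.array([0, 0, 0, 1, 1, 1])
pred_low = knn_predict(train_X, train_y, mfcc(tone(310)))
pred_high = knn_predict(train_X, train_y, mfcc(tone(1900)))
```

In the demo, a held-out 310 Hz tone lands nearest the 300 Hz class and a 1900 Hz tone nearest the 2000 Hz class, mirroring how per-letter MFCC vectors are matched against the labelled training set.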