Although deaf people represent over 5% of the world's population, according to World Health Organization figures from May 2022, they suffer from social and economic marginalization. One way to improve the lives of deaf people is to make communication between them and others easier. Sign language, the means through which deaf people communicate with others, can benefit from modern machine learning techniques. In this study, several convolutional neural network (CNN) models are designed to develop a model that is efficient, in terms of accuracy and computational time, for the classification of different signs. This research presents a methodology for developing an efficient CNN architecture from scratch to classify multiple sign language alphabets, with advantages over other contemporary CNN models in terms of prediction time and accuracy. The framework analyses the effect of varying CNN hyper-parameters, such as kernel size, number of layers, and number of filters in each layer, and selects the ideal parameters for constructing the CNN model. In addition, the proposed CNN architecture operates directly on raw data, without preprocessing, which helps it generalize across datasets. The model's capacity to generalize to diverse sign languages is rigorously evaluated on three distinct sign language alphabets and five datasets: Arabic (ArSL), two American English (ASL) datasets, Korean (KSL), and a combined Arabic-American dataset. The proposed CNN architecture (SL-CNN) outperforms state-of-the-art CNN models and traditional machine learning models, achieving accuracies of 100%, 98.47%, 100%, and 99.5% for the English, Arabic, Korean, and combined Arabic-English alphabets, respectively. The model's prediction (inference) time is about three milliseconds on average, making it suitable for real-time applications and straightforward to deploy as a mobile application in future work.
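To make the hyper-parameter exploration described above concrete, the following is a minimal illustrative sketch (not the authors' code) of how one might grid-search kernel size, number of convolutional layers, and filters per layer with Keras; the input shape, class count, and search ranges are assumptions for demonstration only.

```python
# Illustrative sketch: grid search over kernel size, depth, and width of a
# plain CNN. Dataset shapes, class count, and ranges are assumed values.
import itertools
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 28           # assumed number of alphabet signs
INPUT_SHAPE = (64, 64, 1)  # assumed grayscale hand-sign images

def build_cnn(kernel_size, num_conv_layers, filters):
    """Build a CNN whose kernel size, depth, and width are parameters."""
    model = models.Sequential([layers.Input(shape=INPUT_SHAPE)])
    for i in range(num_conv_layers):
        model.add(layers.Conv2D(filters * (2 ** i), kernel_size,
                                padding="same", activation="relu"))
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(NUM_CLASSES, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def grid_search(x_train, y_train, x_val, y_val):
    """Evaluate every hyper-parameter combination and return the one with
    the best validation accuracy (ranges below are assumptions)."""
    best_acc, best_cfg = 0.0, None
    for kernel_size, depth, filters in itertools.product(
            [3, 5, 7], [2, 3, 4], [16, 32]):
        model = build_cnn(kernel_size, depth, filters)
        model.fit(x_train, y_train, epochs=5, batch_size=64, verbose=0)
        _, acc = model.evaluate(x_val, y_val, verbose=0)
        if acc > best_acc:
            best_acc, best_cfg = acc, (kernel_size, depth, filters)
    return best_cfg, best_acc
```

In such a setup, the configuration returned by `grid_search` would then be retrained on the full training set before the accuracy and inference-time comparisons reported above.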