American Sign Language (ASL) classification is crucial for facilitating communication for individuals with hearing impairments. Traditional methods rely heavily on manual interpretation, which is time-consuming and error-prone. Motivated by the success of deep learning in image processing, this paper explores the application of Convolutional Neural Networks (CNNs) to ASL classification. It presents a CNN architecture tailored to this task and investigates the effectiveness of transfer learning using four pre-trained models: VGG16, InceptionV3, ResNet50, and DenseNet121, along with a comparative analysis of these architectures. Experimental results show that the customized CNN outperforms the pre-trained models, achieving an accuracy of 99.93% on the test set. It is therefore concluded that the customized CNN is the most effective of the evaluated models for sign language classification.
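The transfer-learning setup compared in the paper follows the standard pattern of reusing a pre-trained network (e.g. VGG16) as a frozen feature extractor and training only a new classification head. The following is a minimal, framework-free sketch of that idea, not the paper's implementation: a fixed random projection stands in for the pre-trained backbone (a hypothetical substitute chosen so the example runs without deep-learning libraries or downloaded weights), and a softmax head is trained on toy data standing in for sign images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pre-trained backbone such as VGG16.
# In practice this would be a deep CNN with learned ImageNet weights;
# a fixed random projection plays its role so the sketch is self-contained.
W_backbone = rng.normal(size=(64, 16))          # frozen "feature extractor"

def extract_features(x):
    # Frozen forward pass: these weights are never updated,
    # which is the defining property of the transfer-learning setup.
    return np.maximum(x @ W_backbone, 0.0)      # ReLU features

# Trainable classification head (softmax over a few toy "sign" classes).
n_classes = 3
W_head = np.zeros((16, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy dataset: three clusters standing in for images of three signs.
X = np.concatenate([rng.normal(loc=c, size=(30, 64)) for c in (-2.0, 0.0, 2.0)])
y = np.repeat(np.arange(n_classes), 30)
Y = np.eye(n_classes)[y]

# Features are computed once; only the head is trained (gradient descent
# on the cross-entropy loss of the softmax head).
feats = extract_features(X)
for _ in range(300):
    probs = softmax(feats @ W_head)
    grad = feats.T @ (probs - Y) / len(X)
    W_head -= 0.1 * grad

acc = float((softmax(feats @ W_head).argmax(axis=1) == y).mean())
print(f"training accuracy of the head: {acc:.2f}")
```

With a real backbone, `extract_features` would be replaced by the pre-trained network's convolutional layers with their weights frozen, while the head is trained exactly as above.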