2022
DOI: 10.4314/njtd.v19i3.2

Development of an American Sign Language Recognition System using Canny Edge and Histogram of Oriented Gradient

Abstract: Sign language is used by people with hearing and speaking difficulties, but it is not understood by many people without these difficulties. Sign language recognition systems are therefore developed to aid communication between hearing-impaired people and others. This paper developed a static American Sign Language Recognition (ASLR) system using Canny edge detection and the histogram of oriented gradients (HOG) for feature extraction, with K-Nearest Neighbour (K-NN) as the classifier. The sign language image datasets used consist of Engl…
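A minimal sketch of the pipeline described in the abstract, assuming OpenCV, scikit-image, and scikit-learn as implementation libraries; the Canny thresholds, HOG parameters, and the value of k below are illustrative placeholders, not the authors' reported settings.

```python
import cv2
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

def extract_features(gray_image):
    # Canny edge map first, then a HOG descriptor computed on the edge image.
    edges = cv2.Canny(gray_image, 100, 200)      # thresholds are assumptions
    return hog(edges,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))            # common HOG settings, assumed

# train_images / test_images: grayscale sign images; train_labels: letter classes.
# knn = KNeighborsClassifier(n_neighbors=3)       # k is an assumption
# knn.fit([extract_features(img) for img in train_images], train_labels)
# predictions = knn.predict([extract_features(img) for img in test_images])
```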

Cited by 6 publications (3 citation statements)
References 10 publications
“…In their work, 50% of the images were used for training and validation, and 50% were used for testing. YOLOv4 for the facilitation of VIPs in outdoor environments was employed in [15]. They used a custom dataset of two classes: gutter and bollard.…”
Section: Related Studies
confidence: 99%
“…The Edge Box-SSD technique may not perform well in increasingly complicated and high-resolution situations seen in real-world object identification applications. Adeyanju et al [15] applied CNN for object detection on a custom dataset consisting of two classes. They obtained a high accuracy of 84%.…”
Section: Comparison With State-of-the-art Work
confidence: 99%
“…since there are two branches of the CNN model for feature extraction at each layer of the model. In [31], the authors used fused features of a pre-trained VGG16 and an attention-based VGG16. The model has a computational complexity of 2·O(Σ_{i=1}^{L} S_L²·K_L²·C_{L−1}·C_L + Σ_{i=1}^{f} F_f) + O(k·h). The study [32] involves Canny edge detection, HOG feature extraction, and KNN classification, which has an overall time complexity of O(m·n·log(m·n)) + O(n²) + O(d·F), where m·n is the image size and d and F are the number of data points and features, respectively. In [34], the authors used a vision transformer encoder with a time complexity of O(8·s²·d) + O(8·s·d²), where s is the sequence length and d is the depth.…”
confidence: 99%
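A display-form restatement of the three complexity expressions quoted above, using only the symbols defined in the quoted statement (no new quantities introduced):

```latex
% [31] dual-branch CNN with fused pre-trained and attention-based VGG16 features
2\,O\!\left(\sum_{i=1}^{L} S_L^{2} K_L^{2} C_{L-1} C_L + \sum_{i=1}^{f} F_f\right) + O(kh)

% [32] Canny edge detection + HOG feature extraction + KNN classification
O\!\left(mn \log(mn)\right) + O\!\left(n^{2}\right) + O(dF)

% [34] vision transformer encoder
O\!\left(8 s^{2} d\right) + O\!\left(8 s d^{2}\right)
```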