“…Figure 4 depicts the transformation of American Sign Language images to grayscale prior to segmentation. SIFT keypoints were then computed from the segmented hand objects to detect meaningful features, as in (15). A feature-descriptor step was employed to encode these keypoints as feature vectors.…”
Section: Testbed Environment and Results
“…To illustrate the internal design of the neural network (NN) on grayscale input, four letters were considered to highlight the suggested framework's efficiency. Figure 5 shows the gathered data used for image segmentation, the SIFT algorithm, and the measurement of important keypoints of the hand gestures for the letters "H," "A," "N," and "D." Using (15) and Figure 6, the study calculated the mean (ɱ), the standard deviation (Š), the variance (ɣ), and the average deviation (Ã) for the characteristics of the selected Tp. Table 3 summarizes these values.…”
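The quoted passage reports four summary statistics over the selected keypoint features: mean, standard deviation, variance, and average deviation. A small sketch of how those quantities can be computed over a descriptor matrix is given below; the function name and toy data are assumptions for illustration, and "average deviation" is interpreted here as the mean absolute deviation from the mean.

```python
import numpy as np

def keypoint_statistics(descriptors):
    """Summary statistics over a matrix of SIFT descriptors
    (keypoints x dimensions): mean, standard deviation, variance,
    and average (mean absolute) deviation."""
    d = np.asarray(descriptors, dtype=float)
    mean = d.mean()
    std = d.std()
    var = d.var()
    avg_dev = np.abs(d - mean).mean()
    return mean, std, var, avg_dev

# Toy descriptor matrix (4 keypoints x 8 dims) standing in for SIFT output.
toy = np.array([[1, 2, 3, 4, 5, 6, 7, 8]] * 4, dtype=float)
m, s, v, a = keypoint_statistics(toy)
```

For the toy matrix, the mean is 4.5, the variance 5.25, and the average deviation 2.0, which is easy to verify by hand.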
Section: Testbed Environment and Results
“…As a result, image enhancement is required for reliable recognition. Researchers in earlier studies [14,15] developed a variety of strategies for vision-based hand gesture identification for sign language recognition. Additionally, previous research [16,17] comprehensively reviewed hand gesture approaches that compensate for lighting changes.…”
In this study, the impact of varying lighting conditions on recognition and decision‐making was considered. A luminosity approach was presented to improve gesture recognition performance under varied lighting. An efficient framework was proposed for sensor‐based sign language gesture identification, comprising image acquisition, data preparation, feature extraction, and recognition. Depth images were collected using multiple Microsoft Kinect devices, and data were acquired at varying resolutions to demonstrate the idea. A case study was designed to attain acceptable accuracy in gesture recognition under varying lighting. A dataset of American Sign Language (ASL) gestures was created and analyzed under various lighting conditions. In the ASL images, significant feature points were selected using the scale‐invariant feature transform (SIFT). Finally, an artificial neural network (ANN) classified hand gestures from the selected features for validation. The suggested method was successful across a variety of illumination conditions and image sizes. The overall effectiveness of the NN architecture was shown by a 97.6% recognition accuracy on the 26‐letter alphabet dataset, with just a 2.4% error rate.
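The final stage of the described pipeline feeds SIFT-derived feature vectors to an ANN for letter classification. The sketch below illustrates that stage with scikit-learn's `MLPClassifier` standing in for the paper's network; the synthetic, well-separated 128-dimensional "descriptor" clusters for the letters H, A, N, and D are fabricated for illustration and are not the paper's data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic 128-d "descriptor" vectors: one well-separated Gaussian
# cluster per letter (illustration only, not real SIFT output).
rng = np.random.default_rng(42)
letters = ["H", "A", "N", "D"]
X_parts, y = [], []
for i, letter in enumerate(letters):
    center = np.zeros(128)
    center[i] = 10.0  # separate clusters along different axes
    X_parts.append(center + 0.1 * rng.standard_normal((30, 128)))
    y += [letter] * 30
X = np.vstack(X_parts)

# A small feed-forward network standing in for the paper's ANN.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)
```

With clusters this separable the network fits the training data essentially perfectly; real descriptor data would of course require a held-out test split to report an accuracy figure like the 97.6% quoted above.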