Background/Objectives: Image stitching can produce visually pleasing pictures by aligning and blending different views. We present panoramic image stitching with efficient brightness fusion, which is challenging when the input sequences are captured at different brightness levels and from different angles. Methods/Statistical analysis: To address the brightness problem, the input is a sequence of images captured at different brightness levels. In this work, we propose a technique that blends multiple exposures using simple quality measures such as color, saturation, and contrast. The resulting image quality is good, and, most importantly, the method is efficient because it is simple. The fused images are then used for panoramic image stitching. We use multiband blending to prevent blurring, and the BRIEF (binary robust independent elementary features) method for feature description. We solve the multi-image matching problem using the Hamming distance on the binary descriptors, accepting a match only when the most similar feature is clearly better than the second most similar one. We use a FLANN-based matcher to obtain more accurate results on large datasets, and we estimate the homography between matching images with the RANSAC algorithm. Findings: An effective structure is achieved when we can correct the brightness of over- or under-exposed images and remove ghosting artifacts. To unify the brightness, we collect input images at different exposures and select the good parts of each picture to form a single input image for stitching. We remove blurring from the input images and solve multi-image matching with the Hamming distance method, obtaining better results than other methods. For large datasets, we use the FLANN-based matcher and estimate the homography with the RANSAC algorithm. Improvements/Applications: We have demonstrated the performance of panoramic image stitching with efficient brightness fusion, performing stitching on high-resolution images.
Finally, we were able to create a panoramic image with efficient brightness fusion.
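The descriptor-matching step described above (Hamming distance on binary descriptors, keeping a match only when the best candidate clearly beats the second best) can be sketched in plain NumPy. This is a generic illustration of BRIEF-style matching, not the authors' implementation; the descriptor shapes and the 0.8 ratio are illustrative assumptions.

```python
import numpy as np

def hamming_match(desc_a, desc_b, ratio=0.8):
    """Match binary descriptors by Hamming distance with a ratio test.

    desc_a, desc_b: uint8 arrays of shape (N, 32), i.e. 256-bit
    BRIEF-style descriptors packed into bytes. Returns (i, j) index
    pairs where the nearest neighbor is clearly better than the
    second-nearest (nearest/second-nearest ratio test).
    """
    matches = []
    for i, d in enumerate(desc_a):
        # XOR then bit-count gives the Hamming distance to every row of desc_b
        dists = np.unpackbits(np.bitwise_xor(desc_b, d), axis=1).sum(axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

In a full pipeline these matched pairs would then be passed to a RANSAC homography estimator; here only the matching stage is shown.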
Sign language (SL) recognition is intended to connect deaf people with the general population via a variety of perspectives, experiences, and skills that serve as a basis for the development of human-computer interaction. Hand gesture-based SL recognition encompasses a wide range of human capabilities and perspectives. Recognizing hand gestures efficiently is still challenging due to varying levels of illumination, diversity, multiple aspects, self-occluding parts, different shapes and sizes, and complex backgrounds. In this context, we present an American Sign Language alphabet recognition system that translates sign gestures into text and creates a meaningful sentence from continuously performed gestures. We propose a segmentation technique for hand gestures and present a convolutional neural network (CNN) based on the fusion of features. The input image is captured directly from a video via a low-cost device such as a webcam and is pre-processed by a filtering and segmentation technique, such as the Otsu method. Following this, a CNN is used to extract the features, which are then fused in a fully connected layer. To classify and recognize the sign gestures, a well-known classifier such as Softmax is used. A dataset is proposed for this work that contains only static images of hand gestures, which were collected in a laboratory environment. An analysis of the results shows that our proposed system achieves better recognition accuracy than other state-of-the-art systems.
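The Otsu segmentation step mentioned above can be sketched directly in NumPy. This is the standard textbook Otsu threshold (maximize the between-class variance of the foreground/background split), not the authors' exact pre-processing code, and it assumes a uint8 grayscale input; pixels above the returned threshold would be treated as the segmented hand region.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for a uint8 grayscale image.

    Chooses the gray level that maximizes the between-class variance
    of the background/foreground split.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                # gray-level probabilities
    bins = np.arange(256, dtype=float)
    w0 = np.cumsum(p)                    # cumulative weight of the background class
    m0 = np.cumsum(p * bins)             # unnormalized background mean
    mt = m0[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mt * w0 - m0) ** 2 / (w0 * (1.0 - w0))
    # empty classes produce 0/0 = NaN; treat them as zero variance
    return int(np.argmax(np.nan_to_num(var_between)))
```

A binary hand mask is then simply `gray > otsu_threshold(gray)`, which the CNN stage would consume after resizing.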
Diabetes is a long-term disease caused by the human body's inability to make enough insulin or to use it properly. It is one of the curses of the present world. Although it is not very severe in the initial stage, over time it takes a deadly shape and gradually affects a variety of human organs, such as the heart, kidney, liver, eyes, and brain, leading to death. For this healthcare problem, many researchers have focused on machine learning and deep learning strategies to efficiently predict diabetes from numerous risk variables such as insulin, BMI, and glucose. We proposed a robust approach based on the stacked ensemble method for predicting diabetes using several machine learning (ML) methods. The stacked ensemble comprises two levels: the base models and the meta-model. The base level uses a variety of ML models, such as Support Vector Machine (SVM), K Nearest Neighbor (KNN), Naïve Bayes (NB), and Random Forest (RF), which make different assumptions about predictions, and the meta-model makes the final prediction using Logistic Regression on the predictive outputs of the base models. To assess the efficiency of the proposed model, we considered the PIMA Indian Diabetes Dataset (PIMA-IDD). We used linear and stratified sampling to ensure dataset consistency and K-fold cross-validation to prevent model overfitting. Experiments revealed that the proposed stacked ensemble model outperformed the models used as base classifiers as well as comparable methods, with an accuracy of 94.17%.
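The two-level design described above (SVM/KNN/NB/RF base models, Logistic Regression meta-model, stratified K-fold to avoid overfitting) maps directly onto scikit-learn's `StackingClassifier`. The sketch below uses a synthetic 8-feature dataset as a stand-in for the PIMA table, and all hyperparameters are illustrative defaults, not the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 8-feature PIMA table (NOT the real dataset)
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

base_models = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=42))),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ("nb", GaussianNB()),
    ("rf", RandomForestClassifier(random_state=42)),
]
stack = StackingClassifier(
    estimators=base_models,
    final_estimator=LogisticRegression(),          # meta-model
    # out-of-fold base predictions keep the meta-model from overfitting
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The key design point is that the meta-model is trained only on out-of-fold predictions of the base models, so it learns how to weigh their differing assumptions rather than memorizing their training-set outputs.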
Human hand gestures are becoming one of the most important, intuitive, and essential means of recognizing sign language. Sign language is used to convey different meanings through visual-manual methods. Hand gestures help the hearing impaired to communicate. Nevertheless, it is very difficult to achieve a high recognition rate for hand gestures because of the environment and the physical anatomy of human beings, including lighting conditions, hand size, position, and uncontrolled environments. Moreover, the recognition of appropriate gestures is currently considered a major challenge. In this context, this paper proposes a probabilistic soft voting-based ensemble model to recognize Bengali sign gestures. We have divided this study into pre-processing, data augmentation, an ensemble model-based voting process, and classification for gesture recognition. The purpose of pre-processing is to remove noise from the input images, resize them, and segment the hand gestures. Data augmentation is applied to create a larger database for in-depth model training. Finally, an ensemble model consisting of a support vector machine (SVM), a random forest (RF), and a convolutional neural network (CNN) is used to train on and classify the gestures. The ReLU activation function is used in the CNN to address the neuron death problem, and principal component analysis (PCA) is used to accelerate RF classification. A Bengali Sign Number Dataset named "BSN-Dataset" is proposed for evaluating model performance. The proposed technique enhances sign gesture recognition by utilizing segmentation, augmentation, and soft-voting classifiers, achieving an average accuracy of 99.50%, greater than that of CNN, RF, and SVM individually, as well as significantly higher accuracy than existing systems.
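The soft-voting scheme described above (average the class probabilities of SVM, RF, and CNN, then take the argmax) can be sketched with scikit-learn's `VotingClassifier`. As assumptions not taken from the paper: `load_digits` stands in for the BSN-Dataset, a small ReLU MLP stands in for the CNN, and PCA(30) is an arbitrary choice for the RF speed-up mentioned in the abstract.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 8x8 digit images as a stand-in for segmented gesture images
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(),
                              SVC(probability=True, random_state=0))),
        # PCA reduces the feature count before the forest, speeding it up
        ("rf", make_pipeline(PCA(n_components=30),
                             RandomForestClassifier(random_state=0))),
        # ReLU MLP standing in for the CNN branch
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(64,),
                                            max_iter=500, random_state=0))),
    ],
    voting="soft",  # average per-class probabilities, then argmax
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
```

Soft voting lets a classifier that is merely uncertain be outvoted by two confident ones, which is why it tends to beat each member model on its own.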