Infectious diseases spread rapidly and are difficult to diagnose at an early stage. Artificial Intelligence and Machine Learning have become strategic tools for infectious disease prevention, rapid diagnosis, surveillance, and management. In this paper, a bifold COVID_SCREENET architecture is introduced to provide COVID-19 screening solutions using Chest Radiography (CR) images. In the first fold, transfer learning with nine pre-trained ImageNet models is adapted to extract features of Normal, Pneumonia, and COVID-19 images, which are then classified using a baseline Convolutional Neural Network (CNN). In the second fold, a Modified Stacked Ensemble Learning (MSEL) approach is proposed that stacks the top five pre-trained models to produce the final predictions. Experimentation is carried out in two stages: the first uses open-source samples, and the second uses 2216 real-time samples collected from Tamil Nadu Government Hospitals, India; the screening results for COVID-19 data are 100% accurate in both cases. The proposed approach was also validated and blind-reviewed by two radiologists at Thanjavur Medical College & Hospitals using 2216 chest X-ray images collected between April and May. Based on their reports, the performance measures were calculated for COVID_SCREENET, which showed 100% accuracy in multi-class classification.
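The stacked-ensemble idea in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's MSEL implementation: the base-model softmax outputs are randomly generated stand-ins for the probabilities that the fine-tuned ImageNet backbones would produce, and the meta-decision here is a simple probability average in place of a trained meta-learner.

```python
import numpy as np

# Hypothetical softmax outputs of five base models on 4 samples over
# 3 classes (Normal, Pneumonia, COVID-19). In the paper these would
# come from the top five fine-tuned ImageNet models.
rng = np.random.default_rng(0)
base_preds = rng.dirichlet(np.ones(3), size=(5, 4))  # (models, samples, classes)

# Stacking: concatenate each model's class probabilities into one
# meta-feature vector per sample, the input a meta-learner would see.
meta_features = base_preds.transpose(1, 0, 2).reshape(4, -1)  # (samples, 5*3)

# Stand-in meta-decision: average the stacked probabilities per class
# and take the argmax (a trained meta-model would replace this step).
avg = base_preds.mean(axis=0)        # (samples, classes)
labels = avg.argmax(axis=1)          # predicted class per sample
```

The averaged probabilities still sum to one per sample, so the stand-in behaves like a valid classifier head.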
The Editor-in-Chief and the publisher have retracted this article. The article was submitted to be part of a guest-edited issue. An investigation concluded that the editorial process of this guest-edited issue was compromised by a third party and that the peer review process was manipulated. Based on the investigation's findings, the Editor-in-Chief therefore no longer has confidence in the results and conclusions of this article. The author disagrees with this retraction. Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The emergence of unsupervised generative models has led to greater performance in image and video generation tasks. However, existing generative models face major challenges in high-quality video generation owing to blurry and inconsistent results. In this paper, we introduce a novel generative framework, the Dynamic Generative Adversarial Network (Dynamic GAN) model, for regulating adversarial training and generating photorealistic, high-quality sign language videos from skeletal poses. The proposed model comprises three stages: a generator network; classification and image quality enhancement; and a discriminator network. In the generator stage, the model generates samples similar to real images from random noise vectors; in the second stage, the generated samples are classified using the VGG-19 model and novel techniques are employed to improve their quality; finally, the discriminator stage identifies real or fake samples. Unlike existing approaches, the proposed framework produces photo-realistic video results without using any animation or avatar approaches. To evaluate the model qualitatively and quantitatively, it was tested on three benchmark datasets, yielding plausible results: the RWTH-PHOENIX-Weather 2014T dataset, our self-created Indian Sign Language dataset (ISL-CSLTR), and the UCF-101 Action Recognition dataset. The output samples and performance metrics demonstrate the strong performance of our model.
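The three-stage pipeline described above can be sketched as a data-flow skeleton. Every component below is a deliberately simplified stand-in: the "generator" is a fixed linear map rather than a convolutional network, the "classifier" is a placeholder for VGG-19, and the "enhancer" and "discriminator" are toy functions. The sketch only shows how a noise vector travels through generation, enhancement, classification, and realness scoring.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1 (stand-in generator): maps a 16-d noise vector to a 64x64 frame.
W_g = rng.normal(size=(64 * 64, 16))

def generate(noise):                      # noise: (16,) -> frame: (64, 64)
    return np.tanh(W_g @ noise).reshape(64, 64)

# Stage 2 (stand-ins for VGG-19 classification + quality enhancement):
def classify(frame, n_classes=10):
    logits = rng.normal(size=n_classes)   # placeholder class scores
    return int(np.argmax(logits))

def enhance(frame):                       # toy contrast boost with clipping
    return np.clip(frame * 1.1, -1.0, 1.0)

# Stage 3 (stand-in discriminator): scores "realness" in [0, 1].
def discriminate(frame):
    return 1.0 / (1.0 + np.exp(-frame.mean()))

noise = rng.normal(size=16)
frame = enhance(generate(noise))
label, realness = classify(frame), discriminate(frame)
```

In adversarial training, the generator and discriminator would be updated alternately; this sketch fixes both to keep the stage boundaries visible.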
This research paper addresses the problem of vision-based sign language recognition, which is used to translate signs into a native or foreign language. It aims to design a framework for segmenting and tracking skin objects in continuous signing videos and to develop a fully automatic recognition system that starts by breaking signs into manageable subunits. A variety of spatiotemporal discriminative descriptors are extracted to form a feature vector for each subunit. A boosting algorithm is applied to the subunits to learn a subset of weak classifiers and combine them into a strong classifier for each sign. The results show that the proposed approach is promising for an effective and scalable system for real-world hand gesture recognition from continuous video sequences using boosted subunits.
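The weak-to-strong classifier combination described above is the core of boosting. Below is a minimal AdaBoost-style sketch over a single toy 1-D feature, assumed here as a stand-in for the spatiotemporal subunit descriptors; the weak classifiers are threshold stumps, and their weighted vote forms the strong classifier.

```python
import numpy as np

# Toy data: one subunit feature per example; +1 = sign present.
X = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.05])
y = np.array([-1, -1, -1, 1, 1, -1])

w = np.full(len(X), 1.0 / len(X))          # per-sample weights
stumps = []                                # list of (threshold, alpha)

for _ in range(5):
    # Choose the threshold stump with the lowest weighted error.
    cands = [(t, (w * (np.where(X > t, 1, -1) != y)).sum())
             for t in np.unique(X)]
    t, err = min(cands, key=lambda c: c[1])
    err = max(err, 1e-10)                  # avoid log(0) / division by 0
    alpha = 0.5 * np.log((1 - err) / err)  # weight of this weak classifier
    pred = np.where(X > t, 1, -1)
    w *= np.exp(-alpha * y * pred)         # upweight misclassified samples
    w /= w.sum()
    stumps.append((t, alpha))

def strong(x):
    # Strong classifier: sign of the alpha-weighted vote of all stumps.
    return int(np.sign(sum(a * (1 if x > t else -1) for t, a in stumps)))
```

The reweighting step is what distinguishes boosting from simple voting: each round focuses the next weak learner on the examples the current ensemble still gets wrong.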