With technological advancements, smart health monitoring systems are gaining importance and popularity. Business trends are shifting from physical infrastructure to online services, and the restrictions imposed during COVID-19 accelerated this shift in medical services. Concepts such as smart homes, smart appliances, and smart medical systems have gained popularity. The Internet of Things (IoT) has revolutionized communication and data collection by incorporating smart sensors that gather data from diverse sources. In addition, it employs artificial intelligence (AI) techniques to store, manage, and make decisions from large volumes of data. In this research, a health monitoring system based on AI and IoT is designed to handle data from heart patients. The system monitors each patient’s activities and informs patients about their health status, and it can also classify diseases using machine learning models. Experimental results show that the proposed system performs real-time monitoring of patients and classifies diseases with high accuracy.
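The abstract does not publish the system's pipeline, so the following is only an illustrative sketch of the monitoring idea it describes: stream sensor readings, flag out-of-range vitals, and report status. The field names, thresholds, and rule-based triage below are all hypothetical.

```python
# Illustrative sketch only: the thresholds, reading fields, and rule-based
# triage are assumptions, not the paper's actual model.

def triage_reading(reading):
    """Flag a single sensor reading from a heart patient.

    `reading` is a dict with hypothetical keys: heart_rate (bpm),
    spo2 (%), and systolic_bp (mmHg).
    """
    alerts = []
    if not 60 <= reading["heart_rate"] <= 100:
        alerts.append("abnormal heart rate")
    if reading["spo2"] < 94:
        alerts.append("low blood oxygen")
    if reading["systolic_bp"] > 140:
        alerts.append("elevated blood pressure")
    return alerts

def monitor(stream):
    """Simulate real-time monitoring: yield (reading, alerts) pairs."""
    for reading in stream:
        yield reading, triage_reading(reading)

readings = [
    {"heart_rate": 72, "spo2": 98, "systolic_bp": 118},
    {"heart_rate": 118, "spo2": 91, "systolic_bp": 150},
]
for reading, alerts in monitor(readings):
    print(reading["heart_rate"], alerts or ["normal"])
```

In a full system, the rule-based triage would be replaced by the trained machine learning classifier the abstract mentions, with readings arriving from IoT sensors rather than a list.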
Diagnosing burns in humans has become critical, as early identification can save lives. Manual burn diagnosis is time-consuming and complex, even for experienced doctors. Machine learning (ML) and deep convolutional neural network (CNN) models have become the standard for medical image diagnosis. ML-based approaches typically require handcrafted features for training, which may result in suboptimal performance. Conversely, deep learning (DL) based methods extract features automatically, but designing a robust model is challenging; moreover, shallow DL methods lack long-range feature dependencies, which reduces their efficiency in various applications. We implemented several deep CNN models, ResNeXt, VGG16, and AlexNet, for human burn diagnosis. The results from these models proved less reliable, since shallow deep CNN models need improved attention modules to preserve feature dependencies. Therefore, in the proposed study, the feature map is divided into several categories, and the channel dependencies between any two channel mappings within a given class are highlighted. A spatial attention map is built by considering the links between features and their locations, and the kernel and convolutional layers of our attention-based model, BuRnGANeXt50, are optimized for human burn diagnosis. Earlier studies classified burns by depth into graft and non-graft cases only; we first classify the burn by degree and subsequently into graft and non-graft. Furthermore, the proposed model is evaluated on the Burns_BIP_US_database. The sensitivity of BuRnGANeXt50 is 97.22% and 99.14% for classifying burns by degree and by depth, respectively. The model may be used for quick screening of burn patients and can run in the cloud or on a local machine. The code of the proposed method can be accessed at https://github.com/dhirujis02/Journal.git for the sake of reproducibility.
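The grouped channel/spatial attention of BuRnGANeXt50 is not reproduced here, but the two generic mechanisms the abstract names can be sketched: channel attention reweights feature channels by a function of their global context, and spatial attention reweights locations by their cross-channel statistics. All shapes, weights, and the reduction ratio below are made up for illustration.

```python
import numpy as np

# Hedged sketch of generic channel and spatial attention, not the paper's
# exact BuRnGANeXt50 modules. Weights are random stand-ins for learned ones.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    """fmap: (C, H, W) feature map; w1, w2: small bottleneck weight matrices."""
    c = fmap.shape[0]
    squeeze = fmap.reshape(c, -1).mean(axis=1)            # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # (C,) channel weights
    return fmap * excite[:, None, None]                   # rescale each channel

def spatial_attention(fmap):
    """Weight each spatial location using its cross-channel average and max."""
    avg = fmap.mean(axis=0)                               # (H, W)
    mx = fmap.max(axis=0)                                 # (H, W)
    attn = sigmoid(avg + mx)                              # crude stand-in for a learned conv
    return fmap * attn[None, :, :]

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8)) * 0.1   # hypothetical reduction ratio of 4
w2 = rng.standard_normal((8, 2)) * 0.1
out = spatial_attention(channel_attention(fmap, w1, w2))
print(out.shape)  # (8, 4, 4)
```

The paper's contribution is applying such attention per feature-map category to preserve long-range channel dependencies; the sketch only shows the reweighting mechanics.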
As digitization increases, threats to our data are growing at an ever faster pace. Generating fake videos requires no special knowledge, hardware, memory, or computational resources, yet detecting them is challenging. Several past methods have addressed the problem, but computational costs remain high and a highly efficient model has yet to be developed. We therefore propose a new architecture, the Deep Fake Network (DFN), which combines the basic blocks of MobileNet, a linear stack of separable convolutions, and max-pooling layers, with Swish as the activation function and XGBoost as the classifier, to detect deepfake videos. The proposed model is more accurate than Xception, EfficientNet, and other state-of-the-art models. DFN's performance was tested on the Deep Fake Detection Challenge (DFDC) dataset, where it achieved an accuracy of 93.28% and a precision of 91.03%; training and validation losses were 0.14 and 0.17, respectively. Furthermore, the model handles all types of facial manipulation in videos, making it robust, generalizable, and lightweight.
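Two building blocks the abstract names can be sketched in isolation: a depthwise-separable convolution (a per-channel spatial filter followed by a 1x1 pointwise mix, the core of MobileNet-style blocks) and the Swish activation. The shapes and kernels below are made up; this is not DFN's actual layer code.

```python
import numpy as np

# Illustrative only: sketches a depthwise-separable convolution and Swish,
# the two building blocks named in the abstract. Shapes are hypothetical.

def swish(x):
    return x / (1.0 + np.exp(-x))  # equivalent to x * sigmoid(x)

def depthwise_separable_conv(x, depth_k, point_w):
    """x: (C, H, W); depth_k: (C, k, k), one kernel per channel;
    point_w: (C_out, C) 1x1 pointwise mixing. 'valid' padding, stride 1."""
    c, h, w = x.shape
    k = depth_k.shape[1]
    oh, ow = h - k + 1, w - k + 1
    depth = np.empty((c, oh, ow))
    for ci in range(c):                  # depthwise: filter each channel alone
        for i in range(oh):
            for j in range(ow):
                depth[ci, i, j] = np.sum(x[ci, i:i+k, j:j+k] * depth_k[ci])
    # pointwise: mix channels with a 1x1 convolution
    return np.tensordot(point_w, depth, axes=([1], [0]))

rng = np.random.default_rng(1)
x = rng.standard_normal((3, 6, 6))
out = swish(depthwise_separable_conv(x, rng.standard_normal((3, 3, 3)),
                                     rng.standard_normal((8, 3))))
print(out.shape)  # (8, 4, 4)
```

Separable convolutions cut the parameter count relative to full convolutions (k*k per channel plus a C_out x C mix, instead of C_out * C * k * k), which is why they suit a lightweight detector; in DFN the resulting features feed an XGBoost classifier rather than a dense softmax layer.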
Dispersal among species is an important factor that can govern the dynamics of prey–predator models and produce a variety of spatial structures on a geographical scale. These structures form when passive diffusion interacts with the reaction part of a reaction–diffusion system: even if the reaction kinetics alone cannot break symmetry, diffusion can destabilize the symmetric state and give rise to such structures. In this article, we examine how dispersal affects a prey–predator model with a Hassell–Varley-type functional response when predators do not form tight groups. Using linear stability analysis, we derive the temporal stability of the model and the conditions for Hopf bifurcation at the feasible equilibrium. We then explore spatial stability in the presence of diffusion and develop the criterion for diffusion-driven instability. Using amplitude equations, we investigate the selection of Turing patterns around the Turing bifurcation threshold, and analyzing the stability of these amplitude equations reveals numerous Turing patterns. Finally, numerical simulations validate the analysis, and the theoretical results and simulations are in agreement. Our findings demonstrate that spatial patterns are sensitive to dispersal and predator death rates.
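For context, the diffusion-driven instability criterion mentioned above has a standard textbook form for a generic two-species reaction–diffusion system; the paper's specific Hassell–Varley kinetics $f$, $g$ are not reproduced here.

```latex
% Generic two-species reaction--diffusion system (textbook form):
%   u_t = d_1 \nabla^2 u + f(u,v), \qquad v_t = d_2 \nabla^2 v + g(u,v)
% With Jacobian entries f_u, f_v, g_u, g_v evaluated at the coexistence
% equilibrium, the equilibrium is stable in the absence of diffusion when
\[
  f_u + g_v < 0, \qquad f_u g_v - f_v g_u > 0,
\]
% and diffusion-driven (Turing) instability additionally requires
\[
  d_2 f_u + d_1 g_v > 0, \qquad
  \left(d_2 f_u + d_1 g_v\right)^2 > 4\, d_1 d_2 \left(f_u g_v - f_v g_u\right).
\]
```

The first pair of conditions makes the spatially uniform state stable on its own; the second pair can only hold when the diffusion coefficients differ enough, which is how diffusion, rather than the reaction, triggers the patterning.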
A typical video record aggregation system must concurrently perform a large number of image processing tasks, including image acquisition, pre-processing, segmentation, feature extraction, verification, and description, all executed with precision to keep the system running smoothly. Among these tasks, feature extraction and selection are the most critical. Feature extraction converts large-scale image data into compact mathematical vectors, a process that demands careful design. Various feature extraction models are available, including wavelet, cosine, Fourier, histogram-based, and edge-based models. The key objective of any feature extraction model is to represent the image data with minimal attributes and no loss of information. In this study, we propose a novel feature-variance model that detects differences in video features and generates feature-reduced video frames. These frames are fed into a GRU-based RNN, which classifies them as either keyframes or non-keyframes; keyframes are then extracted to create a summarized video, while non-keyframes are removed. Various keyframe extraction models are also discussed in this section, followed by a detailed analysis of the proposed summarization model and its results. Finally, we present some observations about the proposed model and suggest ways to improve it.
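The feature-variance idea can be illustrated without the GRU stage: reduce each frame to a small feature vector and keep a frame as a candidate keyframe only when its features differ enough from the last kept frame. The choice of grayscale histograms as features and the threshold value are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

# Hedged sketch: the paper's feature-variance model and GRU-based RNN are not
# given here; this shows only the underlying idea of scoring frames by feature
# change. Feature choice (grayscale histograms) and threshold are assumptions.

def frame_features(frame, bins=16):
    """Reduce a frame (2-D grayscale array, values 0-255) to a normalized histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def select_keyframes(frames, thresh=0.3):
    """Keep frames whose features differ enough (L1 distance) from the last keyframe."""
    keep = [0]  # always keep the first frame
    last = frame_features(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        feat = frame_features(frame)
        if np.abs(feat - last).sum() > thresh:   # feature change exceeds threshold
            keep.append(i)
            last = feat
    return keep

rng = np.random.default_rng(2)
static = rng.integers(0, 64, size=(4, 32, 32))      # four similar dark frames
bright = rng.integers(192, 256, size=(1, 32, 32))   # a scene change
frames = np.concatenate([static, bright])
print(select_keyframes(frames))  # → [0, 4]: first frame plus the scene change
```

In the proposed system, this thresholding step would be replaced by the trained GRU-based classifier, which can learn temporal context rather than relying on a fixed distance cutoff.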