With technological advancements, smart health monitoring systems are gaining importance and popularity. Business trends are shifting from physical infrastructure to online services, and the restrictions imposed during COVID-19 changed how medical services are delivered. The concepts of smart homes, smart appliances, and smart medical systems have gained popularity. The Internet of Things (IoT) has revolutionized communication and data collection by incorporating smart sensors that gather data from diverse sources. In addition, artificial intelligence (AI) approaches are used to manage the resulting large volumes of data for storage, analysis, and decision-making. In this research, a health monitoring system based on AI and IoT is designed to handle the data of heart patients. The system monitors patients' activities and informs them about their health status, and it can classify diseases using machine learning models. Experimental results reveal that the proposed system can monitor patients in real time and classify diseases with high accuracy.
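A minimal sketch of the classification step, assuming three vital-sign features and a logistic-regression model trained on synthetic data (the abstract does not specify the classifier or the features, so all names here are illustrative):

```python
import numpy as np

# Synthetic vitals: heart rate (bpm), systolic blood pressure (mmHg), SpO2 (%).
rng = np.random.default_rng(0)
n = 400
heart_rate  = rng.normal(75, 12, n)
systolic_bp = rng.normal(120, 15, n)
spo2        = rng.normal(97, 2, n)
X = np.column_stack([heart_rate, systolic_bp, spo2])
# Synthetic "at risk" label for illustration only.
y = ((heart_rate > 85) & (systolic_bp > 125)).astype(float)

Xn = (X - X.mean(0)) / X.std(0)            # standardize features
Xb = np.hstack([Xn, np.ones((n, 1))])      # add bias column
w = np.zeros(Xb.shape[1])
for _ in range(2000):                      # batch gradient descent on log loss
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / n

pred = (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(float)
acc = (pred == y).mean()
```

In a deployed system the feature vector would come from the IoT sensor stream rather than synthetic draws, and the trained weights would be served from the monitoring back end.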
Diagnosing burns in humans has become critical, as early identification can save lives. Manual burn diagnosis is time-consuming and complex, even for experienced doctors. Machine learning (ML) and deep convolutional neural network (CNN) models have emerged as the standard for medical image diagnosis. ML-based approaches typically require handcrafted features for training, which may result in suboptimal performance. Conversely, deep learning (DL) methods extract features automatically, but designing a robust model is challenging, and shallow DL models lack long-range feature dependencies, decreasing their efficiency in various applications. We implemented several deep CNN models, ResNeXt, VGG16, and AlexNet, for human burn diagnosis. The results obtained from these models were less reliable, since shallow deep CNN models need improved attention modules to preserve feature dependencies. Therefore, in the proposed study, the feature map is divided into several categories, and the channel dependencies between any two channel mappings within a given class are highlighted. A spatial attention map is built by considering the links between features and their locations. The kernel and convolutional layers of our attention-based model, BuRnGANeXt50, are also optimized for human burn diagnosis. Earlier studies classified burns by depth into graft and non-graft cases; we first classify burns by degree and subsequently by depth into graft and non-graft. The performance of the proposed model is evaluated on the Burns_BIP_US_database. The sensitivity of BuRnGANeXt50 is 97.22% and 99.14% for classifying burns by degree and by depth, respectively. The model may be used for quick screening of burn patients and can run in the cloud or on a local machine. The code of the proposed method is available at https://github.com/dhirujis02/Journal.git for reproducibility.
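The grouped channel-attention and spatial-attention ideas can be illustrated with a simplified NumPy sketch (an illustration only; the actual BuRnGANeXt50 modules are learned layers, and the pooling and softmax choices below are assumptions):

```python
import numpy as np

def channel_attention(fmap, groups=4):
    # fmap: (C, H, W). Split channels into groups ("categories") and weight
    # each channel by its affinity to the others in the same group.
    C, H, W = fmap.shape
    g = C // groups
    desc = fmap.reshape(C, -1).mean(axis=1)     # global average pool per channel
    out = np.empty_like(fmap)
    for i in range(groups):
        d = desc[i * g:(i + 1) * g]
        w = np.exp(d - d.max())                 # softmax within the group
        w /= w.sum()
        out[i * g:(i + 1) * g] = fmap[i * g:(i + 1) * g] * w[:, None, None]
    return out

def spatial_attention(fmap):
    # Gate each spatial location with pooled channel statistics.
    avg = fmap.mean(axis=0)
    mx = fmap.max(axis=0)
    attn = 1.0 / (1.0 + np.exp(-(avg + mx)))    # sigmoid spatial map
    return fmap * attn[None, :, :]
```

The spatial gate lies in (0, 1), so it rescales but never flips feature values; a trained module would learn the pooling combination instead of fixing it.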
As digitization increases, threats to our data are growing at an even faster pace. Generating fake videos no longer requires specialized knowledge, hardware, or large computational resources; detecting them, however, remains challenging. Several past methods have addressed the issue, but computation costs are still high and a highly efficient model has yet to be developed. We therefore propose a new architecture, DFN (Deep Fake Network), which combines the basic blocks of MobileNet, a linear stack of separable convolutions, and max-pooling layers with Swish as the activation function and XGBoost as the classifier to detect deepfake videos. The proposed model is more accurate than Xception, EfficientNet, and other state-of-the-art models. DFN was tested on the DFDC (Deep Fake Detection Challenge) dataset, achieving an accuracy of 93.28% and a precision of 91.03%; training and validation losses were 0.14 and 0.17, respectively. Furthermore, the model handles all types of facial manipulation in videos, making it robust, generalized, and lightweight.
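Depthwise-separable convolution with a Swish activation, the kind of building block named above, can be sketched in plain NumPy (a naive, unoptimized illustration with assumed shapes; the real model stacks learned layers and feeds the final features to XGBoost):

```python
import numpy as np

def swish(x):
    # Swish activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def depthwise_separable_conv(x, dw_k, pw_w):
    # x: (C, H, W); dw_k: (C, k, k), one spatial filter per input channel;
    # pw_w: (C_out, C), the 1x1 pointwise mixing weights.
    C, H, W = x.shape
    k = dw_k.shape[1]
    Ho, Wo = H - k + 1, W - k + 1
    dw = np.zeros((C, Ho, Wo))
    for c in range(C):                       # depthwise: per-channel convolution
        for i in range(Ho):
            for j in range(Wo):
                dw[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * dw_k[c])
    # Pointwise 1x1 convolution mixes channels.
    return swish(np.tensordot(pw_w, dw, axes=([1], [0])))
```

Separable convolution splits a dense convolution into a per-channel spatial pass and a 1x1 channel-mixing pass, which is what makes such blocks lightweight.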
Dispersal among species is an important factor that can govern prey–predator dynamics and produce a variety of spatial structures on a geographical scale. These structures form when passive diffusion interacts with the reaction part of a reaction–diffusion system: even when the reaction alone cannot break symmetry, diffusion can destabilize the symmetric state and give rise to patterns. In this article, we examine how dispersal affects a prey–predator model with a Hassell–Varley-type functional response when predators do not form tight groups. Using linear stability analysis, we derive the temporal stability of the model and the conditions for Hopf bifurcation at the feasible equilibrium. We then study spatial stability in the presence of diffusion and develop the criterion for diffusion-driven instability. Using amplitude equations, we investigate the selection of Turing patterns around the Turing bifurcation threshold; examining the stability of these amplitude equations reveals numerous Turing patterns. Finally, numerical simulations validate the analysis, and the theoretical and numerical results agree. Our findings demonstrate that spatial patterns are sensitive to dispersal and predator death rates.
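The abstract does not state the model equations; one common Hassell–Varley-type reaction–diffusion formulation (an assumption here, not necessarily the paper's exact system) is:

```latex
\begin{aligned}
\partial_t u &= d_1 \nabla^2 u + r u\Bigl(1 - \frac{u}{K}\Bigr) - \frac{c u v}{m v^{\gamma} + u},\\
\partial_t v &= d_2 \nabla^2 v + \frac{e c u v}{m v^{\gamma} + u} - \mu v,
\end{aligned}
```

where $u$ and $v$ are prey and predator densities, $d_1, d_2$ are the dispersal (diffusion) coefficients, $\gamma \in (0,1]$ is the Hassell–Varley exponent, and $\mu$ is the predator death rate. Turing instability arises when the diffusion terms destabilize an equilibrium that is stable in the diffusion-free system.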
A typical video record aggregation system must concurrently perform a large number of image-processing tasks, including image acquisition, pre-processing, segmentation, feature extraction, verification, and description, and these tasks must be executed precisely to ensure smooth system performance. Among them, feature extraction and selection are the most critical. Feature extraction converts large-scale image data into compact mathematical vectors, a process that demands careful design. Various feature extraction models are available, including wavelet, cosine, Fourier, histogram-based, and edge-based models; the key objective of any of them is to represent the image data with minimal attributes and no loss of information. In this study, we propose a novel feature-variance model that detects differences in video features and generates feature-reduced video frames. These frames are fed into a GRU-based RNN, which classifies each as a keyframe or a non-keyframe; keyframes are extracted to create a summarized video, while non-keyframes are discarded. Various keyframe extraction models are also discussed, followed by a detailed analysis of the proposed summarization model and its results. Finally, we present some observations about the proposed model and suggest ways to improve it.
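The GRU-based keyframe classifier can be sketched as follows (random weights stand in for trained ones, and the feature-variance weighting and threshold readout are simplifying assumptions on my part):

```python
import numpy as np

def gru_step(x, h, p):
    # One GRU step; p holds the six weight matrices.
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(p["Wz"] @ x + p["Uz"] @ h)            # update gate
    r = sig(p["Wr"] @ x + p["Ur"] @ h)            # reset gate
    h_new = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))
    return (1 - z) * h + z * h_new

rng = np.random.default_rng(0)
d_in, d_h, T = 16, 8, 30
p = {k: rng.normal(scale=0.1, size=(d_h, d_in if k[0] == "W" else d_h))
     for k in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}

# Feature-variance step: per-frame variance of frame-difference features.
frames = rng.normal(size=(T, d_in))               # stand-in frame feature vectors
diff_var = np.var(np.diff(frames, axis=0), axis=1)

h = np.zeros(d_h)
v = rng.normal(size=d_h)                          # readout vector
scores = []
for t in range(1, T):
    h = gru_step(frames[t] * diff_var[t - 1], h, p)
    scores.append(v @ h)
keyframes = np.array(scores) > 0                  # threshold -> keyframe flags
```

Frames flagged `True` would be kept for the summary; the rest are dropped, which is the reduction step the abstract describes.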
With the development of computer technology and artificial intelligence (AI), service robots are widely used in daily life, yet their manufacturing cost remains too high for most small companies. The greatest technical limitations are the design of the service robot and resource sharing among robot groups. Path planning plays an important role in every application of service robots: path optimization, fast computation, and minimal computation time are required throughout. This paper proposes using the Google Cloud Platform and Amazon Web Services (AWS) for robot path planning, with the aim of identifying the effect and impact of cloud computing on service robots. The cloud approach shifts the computational load from the robots to the cloud server. Three different path-planning algorithms were run on the Google Cloud Platform, while on AWS three types of environments, namely dense, moderate, and sparse, were used to exercise the planners. The paper compares robot path planning using cloud services with the traditional on-robot approach, and the cloud platform performs better in this case. The time factor is analyzed in detail. A major finding is that as the size of the environment increases, the relative delay decreases, showing that scaling up the workload makes cloud platforms increasingly beneficial. The results demonstrate that cloud platforms improve the efficiency of path planning and could change the entire perspective on using service robots in the future.
The main advantage is that as the scale of services increases, the system remains stable, whereas the performance of a traditional system starts to deteriorate.
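The abstract does not name the three planners; as an illustration of the kind of computation that gets offloaded to the cloud server, here is a minimal breadth-first-search planner on an occupancy grid:

```python
from collections import deque

def shortest_path(grid, start, goal):
    # BFS on a 4-connected occupancy grid (0 = free cell, 1 = obstacle).
    # Returns the shortest list of cells from start to goal, or None.
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:          # walk parent links back to start
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None
```

In a cloud deployment the robot would upload its map and pose, the server would run a planner like this (or A* with a heuristic for larger maps), and only the waypoint list would be returned to the robot.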
Advances in digital neuroimaging technologies such as MRI and CT have radically changed illness diagnosis in the global healthcare system. Digital imaging systems produce NIfTI images after scanning the patient's body. COVID-19 spurred a worldwide effort to detect lung infection, and CT scans have been performed on a vast number of COVID-19 patients in recent years, resulting in a massive volume of NIfTI images being produced and transmitted over the internet for diagnosis. The dissemination of these medical images over the internet poses a significant problem for the healthcare system: maintaining integrity, protecting intellectual property rights, and addressing other ethical considerations. Another significant issue is how radiologists can recognize tampered medical images, which can otherwise lead to a wrong diagnosis. The healthcare system therefore requires a robust and reliable watermarking method for these images. Several image watermarking approaches have been developed for .jpg, .dcm, .png, .bmp, and other formats, but no substantial contribution has been made for NIfTI images (.nii format). This research proposes a hybrid watermarking method for NIfTI images that employs the Slantlet Transform (SLT), the Lifting Wavelet Transform (LWT), and the Arnold cat map. The proposed technique performed well against various attacks, and compared to earlier approaches the results show that it is more robust and imperceptible.
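The Arnold cat map scrambles pixel positions with an invertible modular transform, which is why watermarking schemes use it to decorrelate the embedded data. A minimal sketch, using one common convention (the paper's exact variant and iteration count are not given):

```python
import numpy as np

def arnold_cat(img):
    # Forward map on an n x n image: (x, y) -> ((x + y) mod n, (x + 2y) mod n).
    n = img.shape[0]
    out = np.empty_like(img)
    for x in range(n):
        for y in range(n):
            out[(x + y) % n, (x + 2 * y) % n] = img[x, y]
    return out

def arnold_cat_inverse(img):
    # Exact inverse: read each pixel back from its scrambled position.
    n = img.shape[0]
    out = np.empty_like(img)
    for x in range(n):
        for y in range(n):
            out[x, y] = img[(x + y) % n, (x + 2 * y) % n]
    return out
```

Because the map is a bijection on the pixel grid, the watermark extractor can undo the scrambling exactly; in practice several forward iterations are applied, with the count acting as a key.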
Small-drone technology is the outcome of a breakthrough in technological advancement for drones. Drones use the Internet of Things (IoT) to provide inter-location services for navigation, but, owing to their architecture and design, they are not immune to security and privacy threats. Establishing a secure and reliable network is essential to obtaining optimal performance from drones. While small drones offer promising avenues for growth in the civil and defense industries, they are prone to attacks on safety, security, and privacy, and their current architecture requires modifications to its data transformation and privacy mechanisms to meet domain requirements. This paper investigates the latest trends in safety, security, and privacy for drones and the Internet of Drones (IoD), highlighting the importance of secure drone networks that resist interception and intrusion. To mitigate cyber-security threats, the proposed framework incorporates intelligent machine learning models into the design of IoT-aided drones, rendering the technology adaptable and secure. Furthermore, a new merged dataset is constructed, comprising a drone dataset and two benchmark datasets. The proposed strategy outperforms previous algorithms, achieving 99.89% accuracy on the drone dataset and 91.64% on the merged dataset. Overall, this intelligent framework offers a promising approach to improving the security and resilience of cyber–physical satellite systems and IoT-aided aerial vehicle systems, addressing the rising security challenges of an interconnected world.
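As an illustration of the kind of intrusion classifier such a framework could embed (the abstract names no specific model; the traffic features, synthetic data, and nearest-centroid rule below are all assumptions), trained on a merged dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_split(n, shift):
    # Synthetic traffic features (e.g., packet rate, payload size,
    # inter-arrival time): benign near 0, attack shifted by `shift`.
    benign = rng.normal(0.0, 1.0, size=(n, 3))
    attack = rng.normal(shift, 1.0, size=(n, 3))
    X = np.vstack([benign, attack])
    y = np.array([0] * n + [1] * n)
    return X, y

# "Merged dataset": a drone capture plus a benchmark set with a similar shift.
X1, y1 = make_split(200, 3.0)
X2, y2 = make_split(200, 2.5)
X = np.vstack([X1, X2])
y = np.concatenate([y1, y2])

# Nearest-centroid detector: assign each sample to the closer class mean.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
acc = (pred == y).mean()
```

Merging sources as done here only works when the feature spaces are aligned; in practice that alignment (shared features, consistent units) is the hard part of building the merged dataset.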