Digitization and automation have always had an immense impact on healthcare, a field that readily embraces every new and advanced technology. Recently, the world has witnessed the rise of the metaverse, an emerging technology in the digital space. The metaverse has huge potential to provide a plethora of health services seamlessly to patients and medical professionals with an immersive experience. This paper proposes the amalgamation of artificial intelligence and blockchain in the metaverse to provide better, faster, and more secure healthcare facilities in digital space with a realistic experience. Our proposed architecture can be summarized as follows. It consists of three environments, namely the doctor’s environment, the patient’s environment, and the metaverse environment. Doctors and patients interact in the metaverse environment assisted by blockchain technology, which ensures the safety, security, and privacy of data. The metaverse environment is the main part of our proposed architecture. Doctors, patients, and nurses enter this environment by registering on the blockchain, and they are represented by avatars in the metaverse environment. All consultation activities between the doctor and the patient are recorded, and the data, i.e., images, speech, text, videos, clinical data, etc., are gathered, transferred, and stored on the blockchain. These data are used for disease prediction and diagnosis by explainable artificial intelligence (XAI) models. The Grad-CAM and LIME approaches of XAI provide logical reasoning for the prediction of diseases and ensure trust, explainability, interpretability, and transparency regarding the diagnosis and prediction of diseases. Blockchain technology provides data security for patients while enabling transparency, traceability, and immutability regarding their data; these features ensure patients’ trust in the handling of their data. Consequently, the proposed architecture ensures transparency and trust regarding both the diagnosis of diseases and the data security of the patient. We also explored the building-block technologies of the metaverse and further investigated the advantages and challenges of the metaverse in healthcare.
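To make the data-storage idea above concrete, the following is a minimal, illustrative sketch of how consultation records could be hash-chained so that tampering becomes detectable. It is not the paper's actual blockchain implementation; the class name `ConsultationLedger`, the avatar identifiers, and the payload fields are assumptions made for the example.

```python
import hashlib
import json
import time

class ConsultationLedger:
    """Toy append-only ledger illustrating how hash-chaining gives immutability."""

    def __init__(self):
        self.chain = []

    def add_record(self, patient_avatar, doctor_avatar, payload):
        # Each block stores consultation data plus the hash of the previous block,
        # so tampering with any earlier record invalidates every later hash.
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        block = {
            "timestamp": time.time(),
            "patient": patient_avatar,
            "doctor": doctor_avatar,
            "payload": payload,  # e.g., references to images, speech, text, clinical data
            "prev_hash": prev_hash,
        }
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        self.chain.append(block)
        return block["hash"]

    def verify(self):
        # Traceability check: recompute every hash and compare against the stored chain.
        for i, block in enumerate(self.chain):
            prev = self.chain[i - 1]["hash"] if i else "0" * 64
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
                return False
        return True


ledger = ConsultationLedger()
ledger.add_record("patient_avatar_01", "doctor_avatar_07", {"note": "initial consultation"})
print(ledger.verify())  # True; editing any earlier record would make this False
```

Verifying the chain after any modification of an earlier record returns False, which is the traceability and immutability property the proposed architecture relies on.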
Dogs are among the most popular pets in the world, and their owners are very concerned about their well-being, which can be ensured by continuously monitoring their activities. Studies of activity detection have gained much popularity due to the significant progress in sensor technology over the last few years. Automatic pet-monitoring applications include real-time monitoring and surveillance systems that detect pets with high accuracy using the latest pet-activity classification techniques. The revolution in technology has allowed us to obtain better results than with conventional techniques. One-dimensional convolutional neural networks (1D CNNs) have recently become a cutting-edge approach for signal-processing systems such as patient-specific ECG classification, sensor-based health monitoring, and anomaly detection in manufacturing. Adaptive and compact 1D models have several advantages over their conventional 2D counterparts: a limited dataset is sufficient to train a 1D CNN efficiently, whereas 2D CNNs require a plethora of training data, and the simpler architecture of a 1D CNN makes it suitable for real-time activity detection. The main goal of this study is to develop a state-of-the-art system that can detect and classify activities based on sensor data (accelerometer and gyroscope). We propose a 1D CNN-based system for pet activity detection. The objective of this study was to recognize ten pet activities, namely walking, sitting, down, staying, eating, sideway, jumping, running, shaking, and nose work, using wearable sensor devices and a deep learning technique. Data were collected from 10 dogs of different breeds, sexes (7 male, 3 female), ages (4 ± 3 years), and sizes (small, medium, large) in a healthy environment. After data collection, data synchronization and preprocessing were performed to remove irrelevant data from the dataset. To address class imbalance in the dataset, we used the class-weight technique and trained the 1D CNN with these class weights. The model trained with class weights achieved 99.70% training accuracy and 96.85% validation accuracy. The 1D CNN approach will be helpful for real-time monitoring of activities and for tracing the behavior of dogs.
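As an illustration of the approach described above, the sketch below trains a compact Keras 1D CNN with class weights on windows of accelerometer and gyroscope signals. The window length, channel count, layer sizes, and the placeholder data are assumptions, since the abstract does not specify the actual network configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight

NUM_ACTIVITIES = 10   # walking, sitting, down, staying, eating, sideway, jumping, running, shaking, nose work
WINDOW_LEN = 100      # assumed number of time steps per sensor window
NUM_CHANNELS = 6      # 3-axis accelerometer + 3-axis gyroscope

def build_model():
    # Compact 1D CNN suitable for real-time, sensor-based activity recognition.
    return tf.keras.Sequential([
        tf.keras.layers.Conv1D(64, 5, activation="relu", input_shape=(WINDOW_LEN, NUM_CHANNELS)),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(128, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_ACTIVITIES, activation="softmax"),
    ])

# X: (num_windows, WINDOW_LEN, NUM_CHANNELS) sensor windows; y: integer activity labels.
X = np.random.randn(1000, WINDOW_LEN, NUM_CHANNELS).astype("float32")  # placeholder data
y = np.random.randint(0, NUM_ACTIVITIES, size=1000)

# Class-weight technique: rarer activities receive proportionally larger weights in the loss.
weights = compute_class_weight("balanced", classes=np.unique(y), y=y)
class_weight = dict(enumerate(weights))

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2, class_weight=class_weight)
```

Weighting the loss this way lets the network pay comparable attention to underrepresented activities without resampling the recorded data.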
The natural phenomenon of harmful algae bloom (HAB) adversely affects the quality of pure and fresh water and increases the risk to human health, water bodies, and the overall aquatic ecosystem. It is therefore necessary to continuously monitor HABs and take proper action against them. Inspecting algae blooms with conventional methods, such as detecting algae under a microscope, is a difficult, expensive, and time-consuming task; computer-vision-based deep learning models, in contrast, play a vital role in identifying and detecting harmful algae growth in aquatic ecosystems and water reservoirs. Many studies have addressed harmful algae growth using CNN-based models; however, the YOLO model is considered more accurate at identifying algae. This advanced deep learning method is extensively used to detect algae and classify them into their corresponding categories. In this study, we used several versions of the convolutional neural network (CNN)-based You Only Look Once (YOLO) model. YOLOv5 has recently received particular attention due to its performance in real-time object detection. We performed a series of experiments on our custom microscopic-image dataset using YOLOv3, YOLOv4, and YOLOv5 to detect and classify harmful algae blooms (HABs) belonging to four classes. We used preprocessing techniques to increase the quantity of data. The mean average precision (mAP) of YOLOv3, YOLOv4, and YOLOv5 was 75.3%, 83.0%, and 91.0%, respectively. Computer-aided systems are very helpful and effective for monitoring algae blooms in fresh water. To the best of our knowledge, this work is pioneering in the AI community in applying YOLO models to detect and classify algae from microscopic images.
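As a rough illustration of the detection step, the snippet below loads YOLOv5 weights through the official `ultralytics/yolov5` torch.hub interface and runs inference on a microscopic image. The weights file `algae_best.pt` and the image name are hypothetical; fine-tuning on a custom algae dataset would typically be done beforehand with the repository's `train.py` script (e.g., `python train.py --data algae.yaml --weights yolov5s.pt`).

```python
import torch

# Load a YOLOv5 model from the official ultralytics/yolov5 hub repo.
# 'custom' lets us point at weights fine-tuned on an algae dataset
# ("algae_best.pt" is a hypothetical file name for such weights).
model = torch.hub.load("ultralytics/yolov5", "custom", path="algae_best.pt")
model.conf = 0.25  # confidence threshold for reported detections

# Run detection on a microscopic image; results hold boxes, classes, and scores.
results = model("microscope_sample.jpg")
results.print()                         # summary of detections per class
detections = results.pandas().xyxy[0]   # DataFrame: xmin, ymin, xmax, ymax, confidence, class, name
print(detections)
```

The per-image detections (bounding boxes plus predicted algae class) are what a monitoring system would aggregate over time to flag a developing bloom.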
The novel coronavirus (COVID-19), which emerged as a pandemic, has claimed many lives and affected millions of people across the world since December 2019. Although the disease is largely under control nowadays, it still affects people in many countries. Traditional diagnostic methods are time-consuming, inefficient, and have a low detection rate for this disease. Therefore, there is a need for an automatic system that expedites the diagnosis process while retaining performance and accuracy. Artificial intelligence (AI) technologies such as machine learning (ML) and deep learning (DL) can provide powerful solutions to this problem. In this study, a state-of-the-art CNN model, the densely connected squeeze convolutional neural network (DCSCNN), was developed for the classification of X-ray images of COVID-19, pneumonia, normal, and lung-opacity patients. Data were collected from different sources. We applied different preprocessing techniques to enhance the quality of the images so that our model could learn accurately and deliver optimal performance. Moreover, the attention regions and decisions of the AI model were visualized using the Grad-CAM and LIME methods. The DCSCNN combines the strengths of Dense and Squeeze networks. In our experiments, seven kinds of classification were performed: six binary classifications (COVID vs. normal, COVID vs. lung opacity, lung opacity vs. normal, COVID vs. pneumonia, pneumonia vs. lung opacity, pneumonia vs. normal) and one multiclass classification (COVID vs. pneumonia vs. lung opacity vs. normal). The main contributions of this paper are as follows. First, the development of the DCSCNN model, which is capable of performing binary as well as multiclass classification with excellent accuracy. Second, to ensure trust, transparency, and explainability of the model, we applied two popular explainable AI (XAI) techniques, Grad-CAM and LIME, which help address the black-box nature of the model. Our proposed DCSCNN model achieved an accuracy of 98.8% for the classification of COVID-19 vs. normal, followed by COVID-19 vs. lung opacity (98.2%), lung opacity vs. normal (97.2%), COVID-19 vs. pneumonia (96.4%), pneumonia vs. lung opacity (95.8%), pneumonia vs. normal (97.4%), and, for the multiclass classification of all four classes (COVID vs. pneumonia vs. lung opacity vs. normal), 94.7%. The DCSCNN model provides excellent classification performance, consequently helping doctors to diagnose diseases quickly and efficiently.
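Since the abstract does not detail DCSCNN's internal layers, the sketch below illustrates only the Grad-CAM visualization step on a generic Keras classifier: it pools the gradients of the predicted class score over the last convolutional feature maps into channel weights and forms a normalized heatmap. The layer name passed as `last_conv_layer_name` and the input image shape are assumptions.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Compute a Grad-CAM heatmap highlighting the regions that drove a prediction."""
    # Map the input image to the last conv layer's activations and the final predictions.
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))  # explain the top predicted class
        class_score = preds[:, class_index]
    # Gradients of the class score w.r.t. the conv feature maps, pooled into channel weights.
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature maps, ReLU, then normalize to [0, 1] for overlaying on the X-ray.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()
```

The resulting heatmap is typically resized to the X-ray's resolution and overlaid on it to show which lung regions contributed most to the predicted class.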