Cloud computing has been among the most widely adopted technologies in recent times, and databases have now also moved to the cloud, so this paper looks into the details of Database as a Service (DBaaS) and its functioning. The paper covers the basic concepts of DBaaS. The working of DBaaS and the challenges it faces are discussed in appropriate detail. The structure of a database in cloud computing and its operation in collaboration with nodes are examined under DBaaS. The paper also highlights the important points to consider before selecting the DBaaS provider that is best among the alternatives. The advantages and disadvantages of DBaaS are presented to help readers decide whether or not to adopt it. DBaaS has already been adopted by many e-commerce companies, and those companies are benefiting from the service.
Significant changes have been seen in healthcare facilities over the past two decades. With the use of IoT-enabled devices, the monitoring and analysis of patient diagnostic parameters has become considerably easier. The latest technological shift in the medical field is the Internet of Medical Things (IoMT). However, the privacy of patient data and the security of information remain open concerns. This research proposes a prototype model that integrates blockchain and IoMT to provide better analysis of patient health factors. The authors suggest that IoMT data be collected by Edge Computing gateway devices and forwarded to a Cloud Gateway. A three-layered decision-making structure ensures the integrity of the data. Further analysis of the information collected by sensor-based devices is done in the Cloud IoT Central Hub service. To ensure the secrecy of patient data and compliance, Smart Contracts are integrated. After the exchange of smart contracts, a block of information is broadcast over the health blockchain. The P2P network makes it viable to retain all health statistics and to process the information further. The paper describes the scenario and experimental setup for a COVID-19 data set analyzed in the proposed prototype model. After the collection of information and decision making, the block of data is sent to all peer nodes. Thus, the combination of IoMT and blockchain makes it easy for healthcare workers to diagnose and handle patient data with privacy. The IoMT is integrated with artificial intelligence to enable decision making based on the classification of data. The results are saved as transactions in the blockchain hyperledger. This study demonstrates the prototype model with test data in a test network with two peer nodes.
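The decision-to-blockchain step the abstract describes can be illustrated with a minimal sketch: each block carries a health record (standing in for the decision made over IoMT sensor data) and the hash of the previous block, so the two peer nodes can verify tamper-evident linkage. All names and fields here are illustrative assumptions, not the paper's actual schema or its Hyperledger implementation.

```python
import hashlib
import json

def make_block(record, previous_hash):
    """Build one block for a hypothetical health chain.

    The block's hash covers both the record and the previous hash,
    so altering any earlier block invalidates every later one.
    """
    block = {
        "record": record,          # e.g. a classified IoMT reading
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# A two-node scenario: after consensus, each peer appends the same block.
genesis = make_block({"patient": "anon-01", "spo2": 97}, previous_hash="0" * 64)
nxt = make_block({"patient": "anon-01", "spo2": 93}, previous_hash=genesis["hash"])
```

In a real deployment the consensus and broadcast would be handled by the blockchain framework; the sketch only shows the hash chaining that makes the stored health statistics tamper-evident.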
Alzheimer’s disease is an incurable neurodegenerative disease that affects brain memory, mainly in older people. Alzheimer’s disease occurs worldwide and mainly affects people older than 65 years. Early diagnosis with accurate detection is needed for this disease. Manual diagnosis by health specialists is error prone and time consuming due to the large number of patients presenting with the disease. Various techniques have been applied to the diagnosis and classification of Alzheimer’s disease, but more accurate early-diagnosis solutions are still needed. The model proposed in this research is a deep learning-based solution using the DenseNet-169 and ResNet-50 CNN architectures for the diagnosis and classification of Alzheimer’s disease. The proposed model classifies Alzheimer’s disease into Non-Dementia, Very Mild Dementia, Mild Dementia, and Moderate Dementia. The DenseNet-169 architecture outperformed ResNet-50 in both the training and testing phases. The training and testing accuracy values for DenseNet-169 were 0.977 and 0.8382, while those for ResNet-50 were 0.8870 and 0.8192. The proposed model is usable for real-time analysis and classification of Alzheimer’s disease.
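The final step of a four-class CNN such as the one described is a softmax over the network's output logits followed by an argmax over the four dementia classes. A minimal sketch of that decision, with made-up logits standing in for the CNN's output (the actual DenseNet-169/ResNet-50 models are not reproduced here):

```python
import math

# The four classes named in the study, in a fixed (assumed) order
CLASSES = ["Non-Dementia", "Very Mild Dementia", "Mild Dementia", "Moderate Dementia"]

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Map raw CNN logits to a class label and its probability."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[idx], probs[idx]

# Illustrative logits, as if produced by the trained network for one scan
label, confidence = classify([0.2, 2.9, 0.7, -1.1])
```

The argmax gives the predicted class; the softmax probability can be reported as a confidence score alongside the diagnosis.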
The amount of data produced in scientific and commercial fields is growing dramatically. Correspondingly, big data technologies, such as Hadoop and Spark, have emerged to tackle the challenges of collecting, processing, and storing such large-scale data. Unfortunately, big data applications usually have performance issues and do not fully exploit the underlying hardware infrastructure. One reason is that these applications are developed in high-level programming languages that do not provide the low-level system control offered by highly parallel programming models such as the Message Passing Interface (MPI). Moreover, big data frameworks are often seen as a poor fit for parallel programming models and accelerators (e.g., CUDA and OpenCL). Therefore, the aim of this study is to investigate how the performance of big data applications can be enhanced without sacrificing the power consumption of the hardware infrastructure. A Hybrid Spark MPI OpenACC (HSMO) system is proposed that integrates Spark as a big data programming model with MPI and OpenACC as parallel programming models. Such integration brings together the advantages of each programming model and provides greater effectiveness. To enhance performance without sacrificing power consumption, the integration approach needs to exploit the hardware infrastructure in an intelligent manner. To achieve this performance enhancement, a mapping technique is proposed that is built on the application’s virtual topology as well as the physical topology of the underlying resources. To the best of our knowledge, no existing method for big data applications utilizes graphics processing units (GPUs), which are now an essential part of high-performance computing (HPC), as a powerful resource for fast computation.
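The idea of mapping an application's virtual topology onto the physical topology can be sketched as a greedy placement: groups of tasks that communicate heavily are kept on one node's GPUs so their traffic stays off the network. This is a toy illustration under assumed node and group names, not the HSMO system's actual mapping algorithm.

```python
# Hypothetical physical topology: nodes and the GPUs attached to each
physical = {"node0": ["gpu0", "gpu1"], "node1": ["gpu2", "gpu3"]}

# Hypothetical virtual topology: Spark partitions that exchange data
# heavily are grouped; each group should land on a single node.
groups = [["p0", "p1"], ["p2", "p3"]]

def map_groups(groups, physical):
    """Greedy topology-aware placement.

    For each communicating group, pick the first node with enough free
    GPUs, so the group's communication stays intra-node.
    """
    free = {node: list(gpus) for node, gpus in physical.items()}
    placement = {}
    for group in groups:
        node = next(n for n, gpus in free.items() if len(gpus) >= len(group))
        for task in group:
            placement[task] = (node, free[node].pop(0))
    return placement

placement = map_groups(groups, physical)
```

A production mapper would also weigh GPU capability and power budgets; the sketch only captures the locality objective that the virtual/physical topology matching is meant to serve.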
The continuing damage and fraud caused by phishing URLs make their detection an indispensable research area. Various techniques are used in the detection process, including neural networks, machine learning, and hybrid approaches. A novel detection model is proposed that combines data mining with Particle Swarm Optimization (PSO) to strengthen the detection of phishing URLs. Feature selection based on various techniques is conducted to identify phishing candidates from the URL. In this approach, the features mined from the URL are extracted using data mining rules and selected on the basis of URL structure. The features identified by the data mining rules are then classified using PSO. Selecting features with PSO optimization makes it possible to identify phishing URLs, and by using a large number of rule identifiers, this approach maximizes the true positive rate. The experiments show that feature selection using data mining and particle swarm optimization greatly helps identify phishing URLs from the structure of the URL itself, while minimizing the processing time needed to identify a phishing website. The approach can therefore be beneficial for identifying such URLs compared with existing detection models.
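Binary PSO for feature selection, as used here, represents each particle as a mask over candidate URL features and moves particles toward the best-scoring masks. The sketch below uses made-up feature names and a toy fitness function standing in for classifier accuracy (the paper's data-mining rules and real evaluation are not reproduced), but the velocity update and sigmoid transfer are the standard binary-PSO mechanics.

```python
import math
import random

random.seed(42)

# Hypothetical URL features; indices 0-2 play the "truly informative" ones
FEATURES = ["has_ip", "url_length", "at_symbol", "num_digits", "has_https", "subdomains"]
INFORMATIVE = {0, 1, 2}

def fitness(mask):
    """Toy stand-in for classifier accuracy: reward informative features,
    lightly penalize selecting extra ones."""
    hits = sum(1 for i in INFORMATIVE if mask[i])
    extras = sum(mask) - hits
    return hits - 0.3 * extras

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binary_pso(n_particles=20, n_iter=60, w=0.7, c1=1.5, c2=1.5):
    dim = len(FEATURES)
    pos = [[random.random() < 0.5 for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)[:]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Standard velocity update toward personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Sigmoid transfer turns velocity into a bit-flip probability
                pos[i][d] = random.random() < sigmoid(vel[i][d])
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best = binary_pso()
selected = [f for f, keep in zip(FEATURES, best) if keep]
```

In the proposed model the fitness would be the true positive rate achieved by the classifier on the rule-extracted features, so the swarm converges on the feature subset that best separates phishing from benign URLs.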
The COVID-19 pandemic has spread worldwide and affects individuals in many ways. Current methods of COVID-19 detection rely on physicians analyzing the patient’s symptoms. Machine learning and deep learning approaches applied to image processing also play a role in identifying COVID-19 from minor symptoms. The problem is that such models do not provide high performance, which impacts timely decision-making. In many places, early disease detection is limited by the lack of expensive resources. This study employed pre-implemented convolutional neural network and Darknet models to process CT scans and X-ray images. Results show that the proposed new models outperformed the state-of-the-art methods by approximately 10% in accuracy. The results will help physicians and the health care system make preemptive decisions regarding patient health. The current approach could be used jointly with existing health care systems to quickly detect and monitor cases of COVID-19.
Big data can be considered to be at the forefront of present and future research activity. The volume of data needing to be processed is growing dramatically in both velocity and variety. In response, many big data technologies have emerged to tackle the challenges of collecting, processing, and storing such large-scale datasets. High-performance computing (HPC) is a technology used to perform computations as fast as possible. This is achieved by integrating heterogeneous hardware and crafting software and algorithms to exploit the parallelism HPC provides. The performance capabilities afforded by HPC have made it an attractive environment for supporting scientific workflows and big data computing, leading to a convergence of the HPC and big data fields. However, big data applications usually do not fully exploit the performance available in HPC clusters. This is because such applications are written in high-level programming languages that do not support exploiting parallelism the way parallel programming models do. The objective of this research paper is to enhance the performance of big data applications on HPC clusters without sacrificing the power consumption of HPC. This can be achieved by building a parallel HPC-based Resource Management System that exploits the capabilities of HPC resources efficiently.