Coronavirus disease (COVID-19) has spread throughout the world, causing widespread disruption from January 2020 to this day. Owing to its rapid spread and high mortality, the WHO has classified it as a pandemic. Biomedical engineers, virologists, epidemiologists, and researchers from other medical fields are working to contain the pandemic as soon as possible. The virus typically incubates in the human body for about five days before symptoms appear, though in some cases symptoms emerge as late as 27 days after infection. In some instances, CT-scan-based diagnosis has been found to have better sensitivity than RT-PCR, which is currently the gold standard for COVID-19 diagnosis. Lung conditions relevant to COVID-19 in CT scans include ground-glass opacity (GGO), consolidation, and pleural effusion. In this paper, two segmentation tasks are performed on chest CT scans: predicting lung spaces (segregated from the ribcage and surrounding tissue) and predicting COVID-19 anomalies. A 2D deep learning architecture with U-Net as its backbone is proposed to solve both segmentation tasks. It is observed that changes in hyperparameters, such as the number of filters in the down- and up-sampling layers, the addition of attention gates, the addition of spatial pyramid pooling as a basic block, and maintaining a homogeneous 32 filters after each down-sampling block, result in good performance. The proposed approach is assessed using publicly available datasets from GitHub and Kaggle. Model performance is evaluated in terms of F1-score and mean intersection over union (mean IoU). The proposed approach achieves an F1-score of 97.31% and a mean IoU of 84.6%. The experimental results show that the proposed approach, using U-Net as the backbone with the hyperparameter changes described, outperforms both the existing U-Net architecture and the attention U-Net architecture. The study also recommends how this methodology can be integrated into healthcare workflows to help control the spread of COVID-19.
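The abstract reports segmentation quality as F1-score and mean IoU. As a minimal sketch of how these per-pixel metrics are computed (the paper's own evaluation code is not given; this assumes flattened binary masks, one prediction/ground-truth pair per class):

```python
def confusion(pred, truth):
    """Per-pixel confusion counts for one class over flattened binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    return tp, fp, fn

def f1_score(tp, fp, fn):
    """F1 (Dice) score: harmonic mean of precision and recall."""
    return 2 * tp / (2 * tp + fp + fn)

def iou(tp, fp, fn):
    """Intersection over union (Jaccard index) for one class."""
    return tp / (tp + fp + fn)

def mean_iou(class_masks):
    """Average IoU over (pred, truth) mask pairs, one pair per class."""
    return sum(iou(*confusion(p, t)) for p, t in class_masks) / len(class_masks)
```

For example, with `pred = [1, 1, 0, 1]` and `truth = [1, 0, 0, 1]`, the counts are `(tp, fp, fn) = (2, 1, 0)`, giving an F1 of 0.8 and a foreground IoU of 2/3; averaging that with the background-class IoU yields the mean IoU.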
The continued demand for resource-hungry services and applications in the IT sector has driven the development of cloud computing. A cloud computing environment involves high-cost infrastructure on one hand and requires large-scale computational resources on the other. These resources must be provisioned (allocated and scheduled) to end users in the most efficient manner so that the tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper, we discuss a selective algorithm for on-demand allocation of cloud resources to end users. The algorithm is based on min-min and max-min, two conventional task-scheduling algorithms, and uses certain heuristics to select between them so that the overall makespan of tasks on the machines is minimized. Tasks are scheduled on machines in either a space-shared or a time-shared manner. We evaluate our provisioning heuristics using the CloudSim cloud simulator and compare our approach against provisioning done in a First-Come-First-Serve (FCFS) manner. The experimental results show that the overall makespan of tasks on a given set of VMs is reduced significantly across different scenarios.
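The min-min and max-min heuristics underlying the selective algorithm can be sketched as follows. Both repeatedly compute each unassigned task's minimum completion time (MCT) across machines; min-min schedules the task with the smallest MCT first, max-min the task with the largest. The abstract does not spell out the selection heuristic itself, so the `selective` function below uses an illustrative rule (an assumption, not the paper's method): prefer max-min when short tasks outnumber long ones, else min-min.

```python
def schedule(etc, chooser):
    """Greedy list scheduling. etc[t][m] = execution time of task t on machine m.
    Returns the (task, machine) assignments and the resulting makespan."""
    ready = [0.0] * len(etc[0])          # per-machine ready time
    unassigned = set(range(len(etc)))
    plan = []
    while unassigned:
        # For each task: (minimum completion time, best machine).
        best = {t: min((ready[m] + etc[t][m], m) for m in range(len(ready)))
                for t in unassigned}
        t = chooser(best)                # min-min vs max-min differ only here
        ct, m = best[t]
        plan.append((t, m))
        ready[m] = ct
        unassigned.remove(t)
    return plan, max(ready)

def min_min(etc):
    """Schedule the task with the smallest MCT first."""
    return schedule(etc, lambda best: min(best, key=lambda t: best[t][0]))

def max_min(etc):
    """Schedule the task with the largest MCT first."""
    return schedule(etc, lambda best: max(best, key=lambda t: best[t][0]))

def selective(etc):
    """Illustrative (assumed) selection rule: if tasks with below-average
    minimum execution time dominate, max-min tends to balance load better."""
    avg = sum(min(row) for row in etc) / len(etc)
    short = sum(1 for row in etc if min(row) < avg)
    return (max_min if short > len(etc) - short else min_min)(etc)
```

On a small expected-time-to-compute matrix such as `[[3, 5], [4, 2], [6, 8]]` (three tasks, two machines), the two base heuristics produce different makespans, which is exactly the gap a selective policy tries to exploit.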