A heterogeneous system can be described as a collection of dissimilar resources, locally or geographically distributed, that is exploited to execute data-intensive and computationally intensive applications. The efficiency of executing scientific workflow applications on heterogeneous systems is determined by the approach used to allocate tasks to suitable resources. Cost and time requirements are emerging as vital concerns in cloud computing environments such as data centers. For scientific workflows, the problems of increased cost and time are especially challenging, because such workflows impose rigorous computational tasks over the communication network. For example, it has been observed that executing a task on an unsuitable resource consumes more cost and time in cloud data centers. In this paper, a new cost- and time-efficient planning algorithm for scientific workflow scheduling on heterogeneous cloud systems is proposed, based on the Predict Optimistic Time and Cost (POTC). The proposed algorithm computes each task's rank based not only on the completion time of the current task but also on its successor nodes along the critical path. Under a tight deadline, this technique reduces both the running time of the workflow and the data-transfer cost. The proposed approach is evaluated on real-world data-intensive workflows and compared with other algorithms from the literature. The test results show that our proposed method can markedly decrease the cost and time of the tested workflows while ensuring a better mapping of tasks to resources.
In terms of makespan, speedup, and efficiency, the proposed algorithm surpasses existing algorithms such as the Endpoint communication contention-aware List Scheduling Heuristic (ELSH), Predict Earliest Finish Time (PEFT), the Budget- and Deadline-constrained heuristic based upon HEFT (BDHEFT), and Minimal Optimistic Processing Time (MOPT), while holding the same time complexity.
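The ranking idea described above can be sketched in code. This is a minimal illustrative sketch, not the authors' actual POTC algorithm: a task's rank combines its own execution time with the optimistic (best-case) rank contribution of its successors along the critical path, and tasks are scheduled in decreasing rank order. The example DAG, cost values, and all function names are assumptions for illustration only.

```python
# Hypothetical sketch of a successor-aware rank computation in the spirit
# of the abstract: rank(task) = own execution time + max over successors of
# (communication cost + successor rank). Sink tasks rank at their own time.
# The DAG and all numbers below are made-up example values.

def rank(task, succs, exec_time, comm_cost, memo=None):
    """Recursively compute an upward rank along the critical path."""
    if memo is None:
        memo = {}
    if task in memo:
        return memo[task]
    best_succ = max(
        (comm_cost[(task, s)] + rank(s, succs, exec_time, comm_cost, memo)
         for s in succs.get(task, [])),
        default=0.0,  # sink task: no successors
    )
    memo[task] = exec_time[task] + best_succ
    return memo[task]

# Tiny example DAG: A -> B, A -> C, B -> D, C -> D
succs = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
exec_time = {"A": 2.0, "B": 3.0, "C": 1.0, "D": 2.0}
comm_cost = {("A", "B"): 1.0, ("A", "C"): 2.0,
             ("B", "D"): 1.0, ("C", "D"): 1.0}

ranks = {t: rank(t, succs, exec_time, comm_cost) for t in succs}
order = sorted(succs, key=lambda t: -ranks[t])  # schedule higher ranks first
```

Here task A receives the highest rank (9.0) because its rank accounts for the entire critical path A → B → D, so it is scheduled first.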
Traffic congestion is one of the major problems of daily life. The objective of this paper is to provide an innovative method to relieve traffic congestion. In the present scenario, numerous traffic signals delay the time taken to reach a destination; to overcome this problem, the signals need to be synchronized. The goal of this project is to develop a system that synchronizes the signals so that congestion is managed in a better manner. Signals across neighboring junctions are synchronized cooperatively, and congestion is cleared according to the traffic density as well as the direction of traffic flow. The paper also uses vehicle speed patterns to predict the traffic flow between two nearby junctions. Traffic flow patterns across all four directions are analyzed and mapped to various fuzzy rules; according to this mapping, the relevant fuzzy rule fires the required module of the real-time traffic signal. Hence, traffic congestion is controlled in a dynamic and adaptive manner. Assuming a vehicle travels at a particular speed from one junction to another, the time limit for changing the signal is calculated from that speed. The core objective of this paper is to provide a cost-effective way to manage traffic and make driving hassle-free.
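The two mechanisms in the abstract, rule-based green-time adjustment from traffic density and an inter-junction offset derived from an assumed travel speed, can be sketched as follows. This is an illustrative sketch only: the density thresholds, extension values, and the 500 m junction spacing are made-up assumptions, not the paper's actual fuzzy rule base.

```python
# Illustrative sketch (not the paper's implementation) of:
# (1) coarse fuzzy-style rules mapping queue density to extra green time;
# (2) the downstream signal offset computed from distance / speed, so a
#     platoon leaving one junction arrives at the next on green.
# All thresholds and numbers are hypothetical example values.

def green_extension(density):
    """Map queue density (vehicles per lane) to extra green seconds."""
    if density < 5:        # LOW density rule
        return 0
    elif density < 15:     # MEDIUM density rule
        return 10
    else:                  # HIGH density rule
        return 20

def signal_offset(distance_m, speed_mps):
    """Delay the downstream green by the platoon's travel time."""
    return distance_m / speed_mps

extra = green_extension(12)          # MEDIUM density -> +10 s of green
offset = signal_offset(500, 12.5)    # 500 m at 12.5 m/s -> 40 s offset
```

In a real deployment the crisp thresholds would be replaced by overlapping fuzzy membership functions, but the structure (classify density, fire a rule, adjust timing) is the same.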
Scientific workflows are large-scale, loosely coupled applications used by computational scientists. They are composed of many fine-grained tasks with dependencies between them. Task clustering is an optimization method that combines multiple tasks into a single job so that task execution time and system overhead are reduced, thereby improving overall performance in a cloud environment. Although existing task clustering algorithms have significantly reduced system overhead, the dependencies among tasks are not well considered. This work examines the task features by which tasks can be clustered and develops an efficient task clustering algorithm. Two task clustering schemes are proposed: Horizontal Coupling Factor (HCF)-based clustering and Horizontal Processing Cost (HPC)-based task clustering. The proposed algorithms were evaluated on various real-world applications, and the experimental results show that the approach suits both data-intensive and compute-intensive applications. The results also show that the HCF and HPC task clustering strategies can significantly improve performance by reducing task execution time and inter-task communication delay.
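The general idea of horizontal task clustering, merging independent tasks at the same workflow level into fewer, larger jobs to amortize per-task scheduling overhead, can be sketched as below. The load-balancing heuristic used here is an illustrative assumption and is not the paper's actual HCF or HPC definition.

```python
# Minimal sketch of horizontal task clustering: tasks at the same workflow
# depth (no dependencies between them) are greedily packed into a fixed
# number of balanced clusters, so each cluster runs as one job. The
# longest-processing-time-first heuristic below is a common balancing
# strategy, assumed here for illustration.

def horizontal_cluster(level_tasks, runtimes, num_jobs):
    """Greedily pack one level's tasks into num_jobs balanced clusters."""
    clusters = [[] for _ in range(num_jobs)]
    loads = [0.0] * num_jobs
    # Place longer tasks first to keep cluster runtimes balanced.
    for t in sorted(level_tasks, key=lambda t: -runtimes[t]):
        i = loads.index(min(loads))   # lightest cluster so far
        clusters[i].append(t)
        loads[i] += runtimes[t]
    return clusters

tasks = ["t1", "t2", "t3", "t4", "t5", "t6"]
runtimes = {"t1": 5, "t2": 4, "t3": 3, "t4": 3, "t5": 2, "t6": 1}
jobs = horizontal_cluster(tasks, runtimes, 2)
```

With six fine-grained tasks merged into two jobs, scheduling overhead is paid twice instead of six times, and both jobs finish at roughly the same time, which is the performance effect the abstract describes.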
Healthcare is a promising application of cloud computing technology, and this paper describes a healthcare network over the cloud. The existing processes for collecting patients' vital data require a great deal of manual work to collect, input, and analyze the information. These processes are usually slow and error-prone, introducing a latency that prevents real-time data accessibility and restrains clinical diagnostics and monitoring capabilities. A solution is proposed to automate this process by installing health kiosks and integrating the devices. The information becomes available in the cloud, from where it can be processed by expert systems and/or distributed to medical staff. This design makes the system more user-friendly while retaining all the benefits of more complicated processes.
Trillions of dollars are spent each year on health care. The Department of Health Research (DHR) keeps track of a variety of health care indicators across the country, resulting in a large geospatial multivariate data set. This paper presents the various techniques, tools, technologies, and algorithms for representing large-scale data, with the aim of providing good overviews of the complete structure and content of the data in one display space. The ability to visualize multiple variables on a map and compare them using tables and charts at the same time can provide valuable insights that might not be obtainable from current tools. A large number of data visualization techniques have been developed over the last decade to support the exploration of large data sets. The techniques and tools discussed in this paper are categorized by the data type to be visualized, the visualization technique, and the interaction and distortion technique.