Software reliability has been an active field of research for the past 35 years. Software developers often need to select an appropriate software reliability model that not only best fits the failure history observed so far but also predicts reasonably well the future behavior of the software under development in terms of detected bugs and errors. Such a model helps in estimating, in advance, the delivery time as well as the overall cost of the software project. Several models have been proposed in the literature for estimating software reliability under different environments. However, among the models developed thus far, no single model best fits all, or even a majority of, real-life situations, and hence none can be universally recommended. In this study, a technique is proposed to serve as a guide for selecting an appropriate software reliability model for an ongoing software development project. The proposed technique has been tested on several available software development project datasets, and the model recommended on the basis of the proposed technique performs better than the models recommended by existing selection approaches.
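The abstract does not give the selection technique itself, but the underlying idea of choosing the model that best depicts the observed failure history can be sketched as follows. This is a minimal, hypothetical illustration: the candidate models (Goel-Okumoto and delayed S-shaped growth curves), the synthetic failure data, and the sum-of-squared-errors criterion with a coarse grid search are all assumptions for the sake of the example, not the paper's actual method.

```python
import math

# Synthetic cumulative-failure data (hypothetical): week -> total failures found.
weeks = list(range(1, 11))
observed = [12, 21, 28, 34, 38, 41, 43, 45, 46, 47]

def goel_okumoto(t, a, b):
    # Mean-value function m(t) = a * (1 - exp(-b t))
    return a * (1.0 - math.exp(-b * t))

def delayed_s_shaped(t, a, b):
    # Mean-value function m(t) = a * (1 - (1 + b t) * exp(-b t))
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

def fit_sse(model, ts, ys):
    """Coarse grid search over (a, b); returns the best sum of squared errors."""
    best = float("inf")
    for a in range(40, 81):              # total expected failures
        for i in range(1, 200):
            b = i * 0.01                 # detection rate
            sse = sum((model(t, a, b) - y) ** 2 for t, y in zip(ts, ys))
            best = min(best, sse)
    return best

candidates = {"Goel-Okumoto": goel_okumoto, "Delayed S-shaped": delayed_s_shaped}
scores = {name: fit_sse(m, weeks, observed) for name, m in candidates.items()}
best_model = min(scores, key=scores.get)  # recommend the best-fitting model
```

A real selection technique would also validate predictive accuracy on held-out future data rather than goodness of fit alone, which is precisely the gap such techniques aim to address.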
This research article formulates a new multi-objective, reliability-based workflow scheduler. Numerous strategies have been proposed in the past to prioritize tasks and map them to cloud resources. Although recent studies yield efficient solutions, their performance is constrained because they do not select resources on the basis of utilization rate and reliability index. It is crucial to consider reliability while mapping tasks onto virtual machines (VMs); moreover, not just the reliability value but also the cost incurred must be minimized. To this end, the proposed strategy is organized into four modules: (i) scrutiny of reliable VMs, (ii) task ranking, (iii) task re-ordering optimized with flower pollination optimization, and (iv) task mapping onto the VMs. The scheduler aims to map each task onto the most suitable machine in terms of makespan, efficiency, and incurred cost. In the experimental setup, four scientific workflows, namely LIGO, Genome, CyberShake, and Montage, were run on the proposed approach and compared with existing approaches, namely FPA, GWO, and GA. The simulation results support these claims: resources are allocated to the cloudlets efficiently, and all of the aforementioned performance measures are balanced adequately.
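The four-module pipeline can be illustrated with a deliberately simplified greedy sketch: filter VMs by a reliability threshold (module i), rank tasks by length (module ii), and map each task to the reliable VM giving the earliest finish time (module iv). The flower-pollination re-ordering step (module iii) is omitted for brevity, and all VM speeds, reliability indices, and task lengths below are invented sample values, not the paper's data.

```python
# Hypothetical inputs: (name, speed in MIPS, reliability index in [0, 1]).
vms = [
    ("vm1", 500, 0.97),
    ("vm2", 800, 0.80),   # will be filtered out: below the threshold
    ("vm3", 1000, 0.99),
]
tasks = [("t1", 4000), ("t2", 12000), ("t3", 6000)]  # (name, length in MI)

RELIABILITY_THRESHOLD = 0.95
reliable = [v for v in vms if v[2] >= RELIABILITY_THRESHOLD]   # module (i)
ranked = sorted(tasks, key=lambda t: t[1], reverse=True)       # module (ii)

finish = {name: 0.0 for name, _, _ in reliable}  # busy time per VM
schedule = {}
for task_name, length in ranked:                               # module (iv)
    # Choose the reliable VM that would complete this task earliest.
    best = min(reliable, key=lambda v: finish[v[0]] + length / v[1])
    finish[best[0]] += length / best[1]
    schedule[task_name] = best[0]

makespan = max(finish.values())  # longest VM busy time
```

With these sample values the unreliable `vm2` never receives work, and the longest task lands on the fastest reliable VM; the paper's metaheuristic re-ordering step would search over permutations of `ranked` to improve on this greedy baseline.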
Underwater wireless sensor networks (UWSNs) have emerged as one of the most popular network technologies owing to their applicability to offshore search, and underwater monitoring and exploration applications. They have been shown to be useful in investigation and surveillance, and in assisting with and offering solutions to water-based calamities. Reliability challenges in the underwater environment have directed researchers' attention towards improving the overall efficiency and energy utilisation of the network. In the present paper, a reliable node quester (RNQ) algorithm is formulated to calculate node reliability from several parameters, such as success rate, transmission time, and the affordability, congestion and stability of the nodes. The paper highlights the data-forwarding mechanism of the nodes, which enhances overall network reliability by (i) reducing the packet drop rate; (ii) increasing the packet delivery ratio; and (iii) minimising energy consumption. Simulation results further support the proposed strategy in terms of network lifespan and detection accuracy.
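The abstract lists the parameters that feed the RNQ reliability calculation without giving the formula. One plausible shape, shown purely as an illustration, is a weighted score over normalised parameters, where transmission time and congestion count as penalties. The weights, the complement-based penalty handling, and the sample node values below are all assumptions, not the paper's actual RNQ metric.

```python
def node_reliability(success_rate, norm_tx_time, congestion, stability,
                     weights=(0.4, 0.2, 0.2, 0.2)):
    """Illustrative weighted reliability score in [0, 1]; higher is better.

    All inputs are assumed pre-normalised to [0, 1]. Transmission time and
    congestion are penalties, so their complements contribute to the score.
    """
    w1, w2, w3, w4 = weights
    return (w1 * success_rate
            + w2 * (1.0 - norm_tx_time)
            + w3 * (1.0 - congestion)
            + w4 * stability)

# Pick the best next-hop forwarder among candidate nodes (invented values).
nodes = {
    "n1": node_reliability(0.95, 0.30, 0.10, 0.90),
    "n2": node_reliability(0.80, 0.10, 0.40, 0.70),
}
best_forwarder = max(nodes, key=nodes.get)
```

Forwarding through the highest-scoring node is what ties such a metric to the three goals above: fewer drops, a higher delivery ratio, and fewer energy-wasting retransmissions.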
Underwater wireless sensor networks (UWSNs) have gained importance and attracted the attention of many researchers and domain experts in the recent past. The devices used for UWSN deployment are resource-constrained, with limited storage and low processing speed, and are also vulnerable to a wide class of security threats and malicious attacks that affect reliable communication. Reliable data delivery should be assessed in terms of metrics such as packet delivery ratio, battery life, incurred delays, and energy consumption. Numerous reliability models for underwater networks have been designed to optimize these parameters and performance metrics. This chapter focuses on such models and their efficiency in terms of battery life, packet loss, error-handling mechanisms, and network delays. It also explains how and why error-control schemes should be designed and implemented to achieve reliable data delivery under the resource constraints of UWSNs, while taking efficiency and performance concerns into account.