Abstract: The current era demands high-quality software in a limited time period to achieve new goals and heights. To meet user requirements, source code undergoes frequent modifications, which can introduce bad smells that deteriorate the quality and reliability of the software. The source code of open-source software is easily accessible to any developer and is therefore frequently modified. In this paper, we propose a mathematical model to predict bad smells using the concept of entropy as defined by information theory. The open-source software Apache Abdera is taken into consideration for calculating the bad smells. Bad smells are collected using a detection tool from subcomponents of the Apache Abdera project, and different measures of entropy (Shannon, Rényi, and Tsallis) are computed. By applying non-linear regression techniques, the bad smells that can arise in future versions of the software are predicted based on the observed bad smells and entropy measures. The proposed model has been validated using goodness-of-fit parameters (prediction error, bias, variation, and root mean squared prediction error (RMSPE)). The values of the model performance statistics (R², adjusted R², mean square error (MSE), and standard error) also justify the proposed model. We have compared the results of the prediction model with the observed results on real data. The results of the model might be helpful for software development industries and future researchers.
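The three entropy measures named in the abstract can all be computed from a probability distribution, e.g. the proportion of code changes falling in each subcomponent. A minimal sketch follows; the distribution is illustrative, not taken from the Abdera data:

```python
import math

def shannon(p):
    """Shannon entropy: H = -sum(p_i * log2 p_i), in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def renyi(p, alpha):
    """Rényi entropy of order alpha (alpha != 1); tends to Shannon as alpha -> 1."""
    return math.log2(sum(x ** alpha for x in p)) / (1 - alpha)

def tsallis(p, q):
    """Tsallis entropy of index q (q != 1)."""
    return (1 - sum(x ** q for x in p)) / (q - 1)

# Illustrative distribution: share of modifications per subcomponent
p = [0.5, 0.25, 0.25]
print(shannon(p))      # 1.5 bits
print(renyi(p, 2))
print(tsallis(p, 2))   # 0.625
```

A regression model as described in the abstract would then fit observed bad-smell counts against such entropy values computed per release.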
A large number of software reliability growth models (SRGMs) have been studied to estimate the reliability of software systems over the past 40 years. Different models have been developed upon different sets of assumptions. A few models were developed in a practical environment by considering testing effort, testing coverage, time-delayed fault correction, and the fault reduction factor. Generally, SRGMs are not dataset independent, and thus the selection of an appropriate SRGM for a specific application is a challenging task in the software reliability area. The wrong selection of an SRGM might produce a wrong estimate of reliability and consequently delay the release of the software. To overcome this problem, we propose a unique hybrid entropy-weight-based multi-criteria decision-making (MCDM) method combined with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) for the selection of a suitable SRGM, and apply it to the optimal selection and ranking of SRGMs. The proposed hybrid approach identifies the relative importance of the criteria for a given application, without which inter-criterion comparison cannot be accomplished. It requires a set of model selection criteria along with a set of SRGMs and their levels on those criteria for optimal selection. It displays the result as a merit value, which is used to rank the SRGMs. The proposed approach has been validated on two real software failure data sets. The results of the study play a vital role in helping the decision maker judge the suitability of an SRGM.
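The general shape of an entropy-weight/TOPSIS pipeline can be sketched as below. The SRGM scores and the criteria used (R² to maximise, MSE and bias to minimise) are hypothetical placeholders, not the paper's data or its exact formulation:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: criteria with more dispersion across
    alternatives receive higher weight."""
    P = X / X.sum(axis=0)                         # column-normalised proportions
    m = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)  # entropy per criterion
    d = 1 - E                                     # degree of diversification
    return d / d.sum()

def topsis(X, w, benefit):
    """Rank alternatives by relative closeness to the ideal solution."""
    R = X / np.sqrt((X ** 2).sum(axis=0))         # vector normalisation
    V = R * w                                     # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                # merit value in [0, 1]

# Hypothetical scores for 3 SRGMs on (R^2 up, MSE down, bias down)
X = np.array([[0.95, 1.2, 0.10],
              [0.90, 0.8, 0.05],
              [0.85, 2.0, 0.20]])
w = entropy_weights(X)
merit = topsis(X, w, benefit=np.array([True, False, False]))
print(merit.argsort()[::-1])  # SRGM indices, best first
```

The merit value plays the role described in the abstract: a single score per SRGM that supports inter-criterion comparison once the entropy-derived weights are in place.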
This paper explores the use of a particle filter—a data assimilation method—to incorporate real-time data into an agent-based model. We apply the method to a simulation of real pedestrians moving through the concourse of Grand Central Terminal in New York City (USA). The results show that the particle filter does not perform well because of (i) the unpredictable behaviour of some pedestrians and (ii) the filter's inability to optimise the categorical agent parameters that are characteristic of this type of model. This problem only arises because the experiments use real-world pedestrian movement data, rather than simulated, hypothetical data, as is more common. We point to a potential solution that involves resampling some of the variables in a particle, such as the locations of the agents in space, while keeping other variables, such as the agents' choice of destination. This research illustrates the importance of including real-world data and provides a proof of concept for the application of an improved particle filter to an agent-based model. The obstacles and solutions discussed have important implications for future work that is focused on building large-scale real-time agent-based models.
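The partial-resampling idea—resampling agents' locations by particle weight while leaving their categorical destination choices untouched—can be illustrated as follows. The 1-D positions, destinations, and Gaussian likelihood are toy stand-ins, not the Grand Central data or the paper's actual filter:

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_resample(positions, destinations, weights, rng):
    """Resample the continuous state (positions) in proportion to particle
    weight, while each particle's categorical choice (destination) is kept."""
    idx = rng.choice(len(weights), size=len(weights), p=weights)
    return positions[idx], destinations  # destinations deliberately untouched

# Toy filter step: 5 particles, each tracking one pedestrian's x-position
positions = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
destinations = np.array([0, 1, 0, 2, 1])       # categorical exit choice
obs = 2.1                                       # observed position
w = np.exp(-0.5 * (positions - obs) ** 2)       # Gaussian likelihood
w /= w.sum()
positions, destinations = partial_resample(positions, destinations, w, rng)
```

After the step, positions cluster near the observation while the discrete destination assignments survive resampling, which is the behaviour the abstract proposes for categorical agent parameters.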
Software testing involves verification and validation of the software to meet the requirements elicited from customers in the earlier phases and to subsequently increase software reliability. Around half of the resources, such as manpower and CPU time, are consumed in the testing phase, and a major portion of the total cost of developing software is incurred there, making it the most crucial and time-consuming phase of the software development life cycle (SDLC). The fault detection process (FDP) and fault correction process (FCP) are also important processes in the SDLC. A number of software reliability growth models (SRGMs) have been proposed in the last four decades to capture the time lag between detected and corrected faults, but most of these models assume a static environment. The purpose of this paper is to allocate resources optimally to minimize the cost of the testing phase using the FDP and FCP under a dynamic environment. An elaborate optimization policy based on optimal control theory for resource allocation, with the objective of minimizing cost, is proposed. Further, a genetic algorithm is applied to obtain the optimal values of detection and correction effort that minimize the cost. A numerical example is given in support of the theoretical result. The experimental results help the project manager identify the contribution of the model parameters and their weights.
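A genetic algorithm searching for the cost-minimising detection and correction effort levels can be sketched as below. The cost function is a hypothetical stand-in (effort spent plus a penalty that falls as effort rises), not the paper's optimal-control model:

```python
import random

random.seed(42)

def cost(d, c):
    """Hypothetical testing-phase cost: linear effort cost plus a penalty
    for insufficient detection (d) and correction (c) effort."""
    return 2.0 * d + 3.0 * c + 50.0 / (1.0 + d) + 80.0 / (1.0 + c)

def genetic_minimise(pop_size=30, gens=100):
    # random initial population of (detection, correction) effort pairs
    pop = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: cost(*ind))
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)      # crossover
            children.append((max(0.0, mid[0] + random.gauss(0, 0.5)),  # mutation
                             max(0.0, mid[1] + random.gauss(0, 0.5))))
        pop = parents + children
    return min(pop, key=lambda ind: cost(*ind))

d_opt, c_opt = genetic_minimise()
```

For this illustrative cost surface the analytic optimum is near d = 4, c ≈ 4.2, which the GA approaches without needing gradients—the property that makes it attractive for the non-linear cost models in the abstract.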
The testing life cycle poses the problem of achieving a high level of software reliability while also achieving an optimal release time for the software. To enhance the reliability of the software, retain its market potential, and reduce testing cost, an enterprise needs to know when to release the software and when to stop testing. To achieve this, enterprises usually release their product earlier in the market and then release patches subsequently. A patch is a piece of software designed to update a computer program or its supporting data to fix or improve it, and software patching is the process through which enterprises debug, update, or enhance their software. When used as a debugging process, patching ensures an optimal release for the product, increasing the reliability of the software while reducing the economic overhead of testing. Today, due to the diverse and distributed nature of software, its journey in the market is dynamic, making patching an inherent aspect of testing. Researchers have worked to minimize testing cost, but so far reliability has not been considered in models for optimal time scheduling using patching. In this paper, we address reliability, a major attribute of software quality. Thus, to address the issues of testing cost, software release time, and a desirable reliability level, we propose a reliability growth model implementing software patching to make the software system reliable and cost effective. The numerical illustration has been implemented using a real-life software failure data set.
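The release-time trade-off described above can be illustrated with the standard Goel–Okumoto SRGM and an illustrative cost model (all coefficients hypothetical, not the paper's): faults surfacing in the field and fixed by post-release patches cost more than faults fixed during testing, so total cost is minimised at an interior release time:

```python
import math

def m(t, a=100.0, b=0.1):
    """Goel-Okumoto mean-value function: expected faults detected by time t."""
    return a * (1 - math.exp(-b * t))

def total_cost(T, a=100.0, b=0.1, c_test=50.0, c_fix=100.0, c_field=500.0):
    """Illustrative cost of releasing at time T: testing effort, faults
    fixed in-house before release, and costlier field faults patched after."""
    return c_test * T + c_fix * m(T, a, b) + c_field * (a - m(T, a, b))

# grid search for the release time T that minimises total cost
best_T = min((t / 10 for t in range(1, 1000)), key=total_cost)
print(best_T)  # about 43.8 time units for these coefficients
```

Releasing earlier than best_T saves testing effort but pays heavily for field patches; releasing later wastes testing budget on a nearly fault-free product, which is the tension the proposed model resolves with reliability as an explicit constraint.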
New products are appearing in the marketplace at an ever-increasing pace. Their launch is either market driven or technology driven. Pricing and warranty policies play a vital role in the launch of a new product and, consequently, in the growth of a company. In this paper, a decision model is proposed to determine the pricing and warranty policies of a newly launched product, considering free replacement during the warranty period and rework during the production process. We assume that rework is performed on the defective items, produced when a machine shifts from an in-control state to an out-of-control state, to make them perfect. The profit function is formulated by combining diffusion models with a cost model. Structured optimal policies are derived using optimal control theory, and a genetic algorithm solution approach is employed to explore the optimal values of price and warranty for every period of the product's life cycle. A numerical example is presented considering different values of the model parameters. Further, a sensitivity analysis is performed to study the impact of the model parameters on the profit model. The results of the paper will be greatly useful to decision-makers, as they allow them to identify the role of the selected parameters during the entire life cycle of the product and to study the long-term policy of a newly launched product.
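The combination of a diffusion model with a cost model can be sketched with the classic Bass diffusion model; the parameters and the flat per-unit warranty cost below are illustrative assumptions, not the paper's formulation:

```python
import math

def bass_adopters(t, m=10000, p=0.03, q=0.38):
    """Cumulative adopters under the Bass diffusion model with market
    potential m, innovation coefficient p, and imitation coefficient q."""
    e = math.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

def period_profit(t1, t2, price, unit_cost=40.0, warranty_cost=5.0):
    """Hypothetical per-period profit: margin times new adopters in the
    period, with a flat expected free-replacement cost per unit sold."""
    sales = bass_adopters(t2) - bass_adopters(t1)
    return (price - unit_cost - warranty_cost) * sales
```

A decision model of the kind described would then vary price and the warranty-dependent replacement cost per period and search (e.g. with a genetic algorithm) for the combination maximising total life-cycle profit.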