“…In situations where there is limited historical data or when codebases evolve rapidly, the predictive accuracy of the model may be compromised. To overcome this limitation, continuous refinement of the model and exploration of more advanced machine learning techniques are necessary [6].…”
In the constantly evolving world of software development, effective testing methodologies are crucial to ensuring the robustness and reliability of applications. This article presents an intelligent, code-driven approach to test execution that uses machine learning to improve the adaptability and accuracy of testing processes. Traditional testing methods often struggle to keep pace with code changes, resulting in suboptimal test execution. Our proposed method applies machine learning to predict the impact of code modifications on test results, enabling a more precise test execution strategy. We demonstrate significant improvements in test execution efficiency, reducing unnecessary tests and shortening feedback cycles. The following discussion examines these findings, addresses potential limitations, and suggests directions for future improvement and extension. Notably, our methodology describes how Git commits inform feature updates and how the machine learning model predicts the names of the updated features. The predicted feature name is then used to select and execute tests in Behavior-Driven Development (BDD) using standard BDD frameworks. By incorporating machine learning into the testing process, developers can achieve greater precision and effectiveness, making significant progress against the challenges posed by code change in modern development environments.
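The pipeline described above — changed paths from a Git commit feed a model that predicts an affected feature name, which then drives tag-based BDD test selection — can be sketched as follows. This is only an illustration: the file paths, feature names, and training history are invented, and a simple token-vote counter stands in for whatever machine learning model the article actually uses; only the final `behave --tags` invocation reflects a standard BDD framework's selection mechanism.

```python
from collections import Counter

# Hypothetical history: changed paths in past commits -> feature they affected.
HISTORY = [
    (["src/auth/login.py", "src/auth/session.py"], "login"),
    (["src/cart/checkout.py"], "checkout"),
    (["src/auth/password.py"], "login"),
    (["src/cart/items.py", "src/cart/totals.py"], "checkout"),
]

def tokenize(paths):
    """Split file paths into directory/file tokens."""
    toks = []
    for p in paths:
        toks.extend(p.replace(".py", "").split("/"))
    return toks

def train(history):
    """Count token -> feature co-occurrences (stand-in for the real ML model)."""
    model = {}
    for paths, feature in history:
        for tok in tokenize(paths):
            model.setdefault(tok, Counter())[feature] += 1
    return model

def predict_feature(model, changed_paths):
    """Predict the feature most associated with the commit's changed paths."""
    votes = Counter()
    for tok in tokenize(changed_paths):
        votes.update(model.get(tok, Counter()))
    return votes.most_common(1)[0][0] if votes else None

def behave_command(feature):
    # Standard BDD runners support tag-based selection, e.g. behave --tags.
    return ["behave", "--tags", f"@{feature}"]

model = train(HISTORY)
feature = predict_feature(model, ["src/auth/login.py"])
command = behave_command(feature)
```

In a real setup the changed paths would come from `git diff --name-only` on the new commit, and the command would be handed to the CI runner so only scenarios tagged with the predicted feature execute.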
“…Some of the techniques are not suitable for RT as they do not scale to larger test suites [40]. Other reported issues relate to path coverage, as the presented techniques are not adequate in terms of path coverage [41]. Fulfilment of requirements is an important issue that is not handled by the existing techniques, which focus on code checks only [42].…”
Regression testing is a widely used approach to confirm the correct functioning of software under incremental development. Test cases make it easier to check the ripple effects of changed requirements. Rigorous testing may help meet quality criteria based on conformance to the requirements given by the intended stakeholders, while a minimized and prioritized set of test cases can reduce the effort and time required for testing and support timely delivery of the application. In this research, a technique named TestReduce is proposed that uses a genetic algorithm to find an optimized, minimal set of high-priority test cases, ensuring that the web application meets the required quality criteria. The ultimate objective of this study is to provide a technique that solves the minimization problem of regression test cases in the presence of linked requirements. The 100-Dollar prioritization approach is used to define the priority of the new requirements.
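A genetic algorithm for this kind of requirement-aware test suite minimization can be sketched as below. The tests, requirements, and point weights are invented for illustration (the weights mimic a 100-Dollar-style distribution, where stakeholders split 100 points across requirements), and the fitness function and GA parameters are assumptions, not TestReduce's actual design: fitness rewards covering high-priority requirements and penalizes suite size.

```python
import random

random.seed(0)  # fixed seed for a reproducible sketch

# Hypothetical data: which requirements each regression test exercises.
TESTS = {
    "t1": {"r1", "r2"},
    "t2": {"r2"},
    "t3": {"r3"},
    "t4": {"r1", "r3"},
    "t5": {"r2", "r3"},
}
# 100-Dollar-style priorities: stakeholders distribute 100 points.
PRIORITY = {"r1": 50, "r2": 30, "r3": 20}
NAMES = sorted(TESTS)

def fitness(bits):
    """Priority value of covered requirements, minus a size penalty."""
    chosen = [n for n, b in zip(NAMES, bits) if b]
    covered = set().union(*(TESTS[n] for n in chosen)) if chosen else set()
    return sum(PRIORITY[r] for r in covered) - len(chosen)

def evolve(generations=60, pop_size=20):
    """Evolve bitstrings (test-inclusion masks) with elitism, crossover, mutation."""
    pop = [[random.randint(0, 1) for _ in NAMES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(NAMES))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # occasional bit-flip mutation
                i = random.randrange(len(NAMES))
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return [n for n, b in zip(NAMES, best) if b]

suite = evolve()
```

Because survivors are carried over unchanged, the best candidate found so far is never lost, and on this toy instance the search settles on a small subset that still covers every requirement.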
“…Several articles have described different kinds of approaches, such as selection- and priority-based [4-6], solver-based [7,8], evolutionary [9,10], graph-based [11,12], machine learning and data-mining [3,13], and other search-based methods [14]. Not only have individual approaches been discussed in the literature, but efforts have also been made to conduct meta-analyses and survey reviews.…”
“…One optimal solution to this problem is to find a minimum-sized test suite, achieved through test suite minimization (TSM): reducing the base suite by selectively removing redundant tests based on some priority system, known as test case selection (TCS) or test case prioritization, while maintaining code coverage metrics [1]. However, this approach poses challenges, as some algorithms may decrease coverage metrics in specific situations (e.g., clusterization [3] without additional management).…”
Software projects grow larger every year, which, in turn, makes the testing process harder. One of the most useful methods for testing large projects is unit‐test generation. However, some tests can repeatedly cover the same parts of the code, making it difficult to maintain a growing test codebase. In software testing, test suite minimization plays a crucial role in reducing the cost of testing and improving the efficiency of the testing process. In this paper, we provide an extensible minimization engine that detects redundant tests using one of the supported minimization algorithms without changing the coverage metrics. We also performed a comprehensive analysis of existing approaches and techniques, developed an engine structure, and implemented multiple algorithms of different kinds. Finally, we evaluated our tool on various open‐source projects to demonstrate its effectiveness and efficiency.
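The core idea — removing redundant tests without changing coverage metrics — can be illustrated with a simple greedy reduction. The per-test coverage sets below are invented, and this greedy pass is only one of many algorithm families such an engine might plug in; it is a sketch, not the paper's implementation.

```python
def minimize(coverage):
    """Drop each test whose coverage is fully provided by the remaining tests."""
    full = set().union(*coverage.values())
    kept = dict(coverage)
    # Consider smaller tests first: they are the most likely to be redundant.
    for name in sorted(coverage, key=lambda n: len(coverage[n])):
        if len(kept) > 1:
            others = set().union(*(c for n, c in kept.items() if n != name))
        else:
            others = set()
        if coverage[name] <= others:       # everything it covers is covered elsewhere
            del kept[name]
    assert set().union(*kept.values()) == full   # invariant: metrics unchanged
    return sorted(kept)

# Hypothetical per-test line coverage.
COVERAGE = {
    "test_a": {1, 2, 3},
    "test_b": {2, 3},    # subset of test_a: redundant
    "test_c": {4, 5},
    "test_d": {3, 4},    # covered by test_a plus test_c: redundant
}
reduced = minimize(COVERAGE)   # -> ["test_a", "test_c"]
```

The invariant assertion is the point: whichever minimization algorithm is swapped in, the union of covered lines before and after reduction must be identical, which is exactly the coverage-preservation guarantee the engine advertises.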