Design and implementation of test case prioritization in iValidator
Software defects cost the US economy an estimated $59.5 billion each year. It has been suggested that improved testing, focused on earlier identification and removal of defects, could save $22.2 billion. Various techniques have been proposed to increase the success rate of early defect detection, especially during integration and regression testing. Typically, such techniques rely on executing the test cases within a test scenario in some strategic order, and experiments with these techniques report some degree of success in the early detection of software defects.

The objective of this work is to facilitate early detection of defects using a test ordering technique based on test case prioritization. The proposed technique assigns a priority to each test case in a suite and then executes the test cases in descending priority order. The priority values are computed by an algorithm developed as part of this work, which computes priorities as a function of defects detected in previous test runs (test history), the presence of error-prone code constructs, and McCabe's complexity. The technique is intended for use during integration testing and regression testing.

The concepts developed in this project have been implemented and integrated into an open source test tool called iValidator. The iValidator tool can automatically execute a suite of test cases in a specified order; it can also report the test results and maintain a test history. In the iValidator nomenclature, test cases are called test steps. A test step is a composite entity that includes one or more software units to be tested and the associated unit test descriptions. A test step can relate to the testing of a use case or a sequence within a use case; it can also represent a collection of test cases used in the functional testing of a software component. A collection of test steps makes up a test description.
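The abstract names the three inputs to the prioritization algorithm but not its exact formula. As a minimal sketch, assuming a simple weighted sum of the three factors (the weights, the `TestStep` fields, and `compute_priority` itself are hypothetical, not the actual iValidator algorithm), the prioritize-then-sort idea can be illustrated as:

```python
# Illustrative sketch only: the exact priority formula is not given in the
# abstract; the weights and field names below are assumptions.
from dataclasses import dataclass

@dataclass
class TestStep:
    name: str
    history_defects: int          # defects detected in previous runs (test history)
    error_prone_constructs: int   # hits reported by static analysis
    mccabe_complexity: int        # McCabe complexity of the units under test

def compute_priority(step: TestStep,
                     w_history: float = 0.5,
                     w_constructs: float = 0.3,
                     w_complexity: float = 0.2) -> float:
    """Hypothetical weighted combination of the three factors named in the text."""
    return (w_history * step.history_defects
            + w_constructs * step.error_prone_constructs
            + w_complexity * step.mccabe_complexity)

suite = [
    TestStep("login",  history_defects=3, error_prone_constructs=1, mccabe_complexity=4),
    TestStep("report", history_defects=0, error_prone_constructs=2, mccabe_complexity=12),
    TestStep("export", history_defects=1, error_prone_constructs=0, mccabe_complexity=2),
]

# Execute the steps in descending priority order, as the technique prescribes.
ordered = sorted(suite, key=compute_priority, reverse=True)
print([s.name for s in ordered])  # → ['report', 'login', 'export']
```

With these (made-up) weights, a step with a complex, never-failing unit can outrank a step with a short failure history, which is the kind of trade-off the weighting would control.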
Typically, each test description is associated with a System under Test (SuT) representing the higher-level software application being tested. In this work, the iValidator tool has been extended with capabilities for performing static code analysis to detect error-prone code constructs and for computing McCabe complexity values; the results are expressed in XML. The enhanced tool then uses these computed values, together with the previously recorded test history, to compute the test priority of each test step in a test suite. In a typical test scenario, the enhanced iValidator tool first determines the test priorities of the test steps in a test description and then executes the test steps in descending order of test priority. The tool generates two reports upon completion of each test run. The first report describes the test execution results for the test steps in the suite. The second report describes the test execution history, the results of the static analysis for error-prone code constructs, and the McCabe complexity values. A prototype of the iValidator enhancements has been designed and implemented, and the enhancements have been tested and validated with production-quality code.
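The static-analysis extension computes McCabe's cyclomatic complexity, which for a single-entry routine equals the number of decision points plus one. A minimal sketch of that counting, using Python's `ast` module as a stand-in for the tool's analyzer (the real enhancement analyzes the SuT's source and emits XML, neither of which is shown here; counting each boolean operator chain as one decision is also a simplification of strict McCabe counting):

```python
# Illustrative sketch: complexity = decision points + 1, counted over an AST.
# This stands in for the static analysis the enhanced tool performs; it is
# not the tool's actual implementation.
import ast

# Node types treated as decision points (a simplification for this sketch).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def mccabe_complexity(source: str) -> int:
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1

src = """
def classify(x):
    if x < 0:
        return "neg"
    for i in range(x):
        if i % 2:
            return "odd-seen"
    return "done"
"""
print(mccabe_complexity(src))  # → 4 (three decision points + 1)
```

A per-unit value like this, written alongside the static-analysis hits and the recorded test history, is the kind of input the priority computation consumes.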