A Software Testing and Reliability Early Warning (STREW) Metric Suite

Abstract

Demand for quality in software applications has grown, and awareness of software testing plays an important role in meeting that demand. Unfortunately, in industrial practice, information on the field quality of a product tends to become available too late in the software lifecycle to guide corrective action affordably. An important step toward remedying this problem is the ability to estimate post-release field quality early. This dissertation presents a suite of nine in-process metrics, the Software Testing and Reliability Early Warning (STREW) metric suite, that leverages the software testing effort to provide (1) an estimate of post-release field quality early in the software development process, and (2) color-coded feedback to developers on the quality of their testing effort, identifying areas that could benefit from more testing. We built and validated our model via a three-phase case study approach involving, progressively, 22 small-scale academic projects, 27 medium-sized open source projects, and five large-scale industrial projects. The ability of the STREW metric suite to estimate post-release field quality was evaluated using statistical regression models in these three environments. The estimation error and the sensitivity of the predictions indicate that the STREW metric suite can effectively predict post-release software field quality. Further, the test quality feedback was statistically significantly associated with post-release software quality, indicating that the STREW metrics provide meaningful feedback on the quality of the testing effort.
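
The abstract states that field quality estimation was evaluated with statistical regression models over the STREW metrics. The following minimal Python sketch illustrates the general shape of such a model only: the three test-effort ratios, the response variable (trouble reports per KLOC), and all numeric values are illustrative assumptions, not the dissertation's actual nine metrics, data, or fitted model.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical per-project, test-effort ratios in the STREW style
    # (illustrative names only), e.g.:
    #   col 0: test LOC / source LOC
    #   col 1: assertions / source LOC
    #   col 2: test cases / source LOC
    X = np.array([
        [0.45, 0.12, 0.030],
        [0.80, 0.25, 0.055],
        [0.30, 0.08, 0.020],
        [0.60, 0.18, 0.041],
        [0.95, 0.30, 0.070],
    ])

    # Synthetic post-release trouble reports per KLOC for each project.
    y = np.array([4.1, 1.9, 5.3, 3.0, 1.2])

    # Fit a multiple linear regression mapping in-process test metrics
    # to observed field quality, then estimate quality for a new project.
    model = LinearRegression().fit(X, y)
    estimate = model.predict([[0.55, 0.15, 0.035]])
    print(f"Estimated post-release trouble reports/KLOC: {estimate[0]:.2f}")

In practice such a model would be fit per environment (academic, open source, industrial), and its estimation error and prediction sensitivity assessed against held-out releases, as the abstract describes.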

Keywords

Software metrics, Software reliability, Software testing, Software field quality

Degree

PhD

Discipline

Computer Science
