A Software Testing and Reliability Early Warning (STREW) Metric Suite

dc.contributor.advisor Dr. Christopher G. Healey, Committee Member en_US
dc.contributor.advisor Dr. Mladen A. Vouk, Committee Member en_US
dc.contributor.advisor Dr. Laurie A. Williams, Committee Chair en_US
dc.contributor.advisor Dr. Jason A. Osborne, Committee Member en_US
dc.contributor.author Nagappan, Nachiappan en_US
dc.date.accessioned 2010-04-02T19:05:10Z
dc.date.available 2010-04-02T19:05:10Z
dc.date.issued 2005-02-14 en_US
dc.identifier.other etd-02112005-121445 en_US
dc.identifier.uri http://www.lib.ncsu.edu/resolver/1840.16/4964
dc.description.abstract The demand for quality in software applications has grown, and awareness of software testing-related issues plays an important role in meeting that demand. Unfortunately, in industrial practice, information on the field quality of a software product tends to become available too late in the software lifecycle to affordably guide corrective actions. An important step toward remedying this problem is the ability to provide an early estimate of post-release field quality. This dissertation presents a suite of nine in-process metrics, the Software Testing and Reliability Early Warning (STREW) metric suite, that leverages the software testing effort to provide (1) an estimate of post-release field quality early in the software development phases, and (2) color-coded feedback to developers on the quality of their testing effort, identifying areas that could benefit from more testing. We built and validated our model via a three-phase case study approach that progressively involved 22 small-scale academic projects, 27 medium-sized open source projects, and five large-scale industrial projects. The ability of the STREW metric suite to estimate post-release field quality was evaluated using statistical regression models in these three environments. The estimation error and the sensitivity of the predictions indicate that the STREW metric suite can effectively be used to predict post-release software field quality. Further, the test quality feedback was found to have a statistically significant association with post-release software quality, indicating the ability of the STREW metrics to provide meaningful feedback on the quality of the testing effort. en_US
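The abstract states that the STREW metrics are fed into statistical regression models to estimate post-release field quality. Purely as an illustration, and not as the dissertation's actual model, the sketch below shows how a handful of in-process test metrics could be regressed against observed post-release failure data with an ordinary least-squares fit; the metric columns, project values, and failure counts are hypothetical, and the real STREW suite comprises nine specific metrics defined in the dissertation.

    import numpy as np

    # Hypothetical in-process test metrics for six completed projects (one row
    # per project). Columns are illustrative ratios only, e.g. test LOC / source
    # LOC, assertions per KLOC, test cases per requirement -- NOT the actual
    # nine STREW metrics.
    X = np.array([
        [0.45, 12.0, 1.1],
        [0.62, 18.5, 1.6],
        [0.30,  7.2, 0.8],
        [0.75, 22.4, 2.0],
        [0.51, 15.1, 1.3],
        [0.40, 10.3, 1.0],
    ])

    # Observed post-release failures per KLOC for the same projects (made up).
    y = np.array([3.2, 1.9, 4.5, 1.1, 2.4, 3.0])

    # Ordinary least-squares fit of y ~ X with an intercept term.
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    coeffs, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)

    # Early estimate of post-release field quality for a new project, computed
    # from its in-process metrics before release.
    new_project = np.array([0.55, 16.0, 1.4, 1.0])  # metric values + intercept
    estimate = float(new_project @ coeffs)
    print(f"Estimated post-release failures/KLOC: {estimate:.2f}")

In the dissertation itself, separate regression models were built and evaluated in the academic, open source, and industrial environments; this sketch only illustrates the general estimation step.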
dc.rights I hereby certify that, if appropriate, I have obtained and attached hereto a written permission statement from the owner(s) of each third party copyrighted matter to be included in my thesis, dissertation, or project report, allowing distribution as specified below. I certify that the version I submitted is the same as that approved by my advisory committee. I hereby grant to NC State University or its agents the non-exclusive license to archive and make accessible, under the conditions specified below, my thesis, dissertation, or project report in whole or in part in all forms of media, now or hereafter known. I retain all other ownership rights to the copyright of the thesis, dissertation or project report. I also retain the right to use in future works (such as articles or books) all or part of this thesis, dissertation, or project report. en_US
dc.subject Software metrics en_US
dc.subject Software reliability en_US
dc.subject Software testing en_US
dc.subject Software field quality en_US
dc.title A Software Testing and Reliability Early Warning (STREW) Metric Suite en_US
dc.degree.name PhD en_US
dc.degree.level dissertation en_US
dc.degree.discipline Computer Science en_US


Files in this item

Files    Size      Format
etd.pdf  682.2 KB  PDF
