Title: Estimating Reliability During Non-Representative Testing

S.J. Zeil
Old Dominion University

Abstract: There is a substantial body of literature devoted to "directed" testing methods, which manipulate the choice of test inputs so as to increase the probability and/or rate of fault detection. These include most well-known testing methods: functional and structural testing, data flow coverage, mutation analysis, and domain testing. A recurring theme in this literature is the problem of knowing when to stop testing. In contrast, a variety of reliability growth models provide quantified measures of test effectiveness in terms directly relevant to project management, but at the cost of restricting testing to "representative" selection, in which test data is chosen to reflect the operational distribution of the program's inputs. We have been working to find common ground between directed testing and reliability modeling:

-- Exploring reliability models that can be applied to non-representative test processes,

-- Developing improved data collection processes for reliability growth modeling under both representative and directed testing.

Our approach is based not upon program failure rates, as is conventional, but upon the failure rates of individual faults. Thus we shift the point of observation from testing to debugging. What matters is not how long we waited to obtain a failure, but how often the corrected fault would have manifested in actual use.

(This work was developed jointly with Brian Mitchell.)
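To illustrate the shift in point of observation, the following is a minimal sketch, not the authors' actual model: it assumes that each corrected fault i is assigned an estimated operational failure rate lam[i] at debugging time (how often that fault would have manifested in actual use), that per-fault rates are independent and additive, and that reliability follows a simple constant-rate exponential model. All function and variable names here are hypothetical.

```python
import math

def residual_failure_rate(initial_rate, corrected_fault_rates):
    """Program failure rate remaining after the listed faults are removed,
    under the (assumed) model that per-fault rates add up."""
    return initial_rate - sum(corrected_fault_rates)

def reliability(rate, t):
    """Probability of failure-free operation for duration t under a
    constant-rate exponential model: R(t) = exp(-rate * t)."""
    return math.exp(-rate * t)

# Hypothetical example: the program starts at 0.05 failures/hour;
# debugging removes three faults whose operational failure rates were
# estimated individually (the per-fault data this abstract emphasizes).
remaining = residual_failure_rate(0.05, [0.02, 0.015, 0.005])
print(remaining)                      # residual failure rate
print(reliability(remaining, 10.0))   # chance of 10 failure-free hours
```

The point of the sketch is that the estimate is driven by the rates of the individual corrected faults, not by how long the tester waited between observed failures, so it remains meaningful even when test inputs are not drawn from the operational distribution.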