On the Software Trustability Assessment

Grigore Albeanu
albeanu@math.math.unibuc.ro
Bucharest University, Faculty of Mathematics, Academiei 14, RO-70109, Bucharest, Romania

Fl. Popentiu Vladicescu
Fl.Popentiu@city.ac.uk
City University, Department of Electrical, Electronic & Information Engineering, Northampton Square, London EC1V 0HB, UK

Abstract

This paper considers the concept of trustability introduced by Howden [3] and outlines some important facts concerning the use of trustability as a key concept in software quality management. The assessment of software trustability is also considered and detailed.

1. Introduction

Software reliability is the probability of a software product operating for a given period of time in a particular environment without exhibiting any failures [4]. Many methods for estimating reliability are available; we mention only some works: [4] and [5]. However, the failure intensity (failures per unit time) of a software-based system depends on how the system is used. The usage is characterised by the operational profile: the set of operations available on the system and their associated probabilities of occurrence. Reliability growth models assume that during testing the program is executed several times using test cases selected randomly according to the operational profile of the program under evaluation. The collected test data can, for instance, include: test identification, effective execution time, set-up time, total test time, result (passed, or severity of failure: critical, severe, major or minor), whether or not the cause of a failure has been removed, and where the faults corresponding to the failures were found and corrected. More considerations about program testing and analysis can be found in [1], [2] and [4].
From a practical point of view, this testing step is one of the following interrelated software reliability sub-processes in a typical software engineering life cycle: document construction, integration, inspection and correction; code construction, integration, inspection and correction; test preparation and testing; fault identification and fault repair; validation of repairs and re-testing. A method based on testing a finished program requires knowledge of the program's operational distribution and a very large number of tests even for modest estimates. Another approach to estimating software quality is based on the concept of trustability introduced by Howden in [3]. This paper addresses the problem of trustability assessment and its application to software quality.

2. Terminology and notations

Let P be the software program space and M be the set of software evaluation procedures. Assume that the size of M is n. These sets are non-empty. Let m ∈ M and d(m) ∈ [0,1]. We call d(m) the detectability of the method m if d(m) is the probability that m will detect a fault in a selected program p ∈ P, given that the program p contains a fault. In the testing process the set of all possible faults is partitioned into fault classes according to some criterion (for example, failure severity classes). Two types of detectability methods are available: deterministic and probabilistic methods. Deterministic methods have detectability 1 for well-defined classes of faults. A probabilistic method has a probability less than 1 of detecting faults, which depends on the sampling process that is assumed to occur when a program is constructed or tested. Assume the set of faults F is partitioned into k classes denoted by Fj, and let fj be the occurrence frequency of the faults in class j (1 ≤ j ≤ k). Let dj(m) be the detectability of method m for fault class Fj, and let D be the detectability matrix, having n rows and k columns: one row for every method.
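The occurrence frequencies fj can be estimated by normalising observed fault counts per class. The following is a minimal sketch under hypothetical counts (the class names follow the severity classes mentioned in the introduction; the numbers are illustrative only, not from the paper):

```python
# Hypothetical fault counts per severity class (illustrative values only).
counts = {"critical": 2, "severe": 5, "major": 13, "minor": 30}

total = sum(counts.values())
f = {cls: n / total for cls, n in counts.items()}  # occurrence frequencies f_j
# The frequencies sum to 1 by construction, as a partition of F requires.
```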
In this case: D = (dij), i = 1, 2, ..., n; j = 1, 2, ..., k; dij = dj(mi), where mi ∈ M for i = 1, 2, ..., n. In the testing and analysis process a strategy is necessary. Such a strategy can be given by a vector S = (si), i = 1, 2, ..., n, where si is a binary indicator for the method mi (si = 1 when the method mi is applied, si = 0 otherwise). Define the trustability of a computer program p as the number T(p) having the following property: the hypothesis that p is free of faults is accepted with confidence q, with q at least T(p). Using the above assumptions and notations, following [3], if p is a computer program under testing and analysis and S is a strategy applied to p with no faults detected, then p has trustability at least T(p), where

T(p) = 1 - max { min { fj (1 - dij)^si | i = 1, 2, ..., n } | j = 1, 2, ..., k },

and si is the usage factor of the method mi (the components of the strategy S). The above formula is called the trustability formula. Sometimes, a testing and analysis procedure depends on one or more parameters, gathered in a vector θ. Let θi be the parameter vector associated with the method mi and ci(θi, p) the cost of applying the method mi to a program p. Then the cost of applying a strategy S to the program p is given by:

C(S, p) = Σ i=1..n  si ci(θi, p).

An important problem is to determine a strategy with optimal trustability and cost. Let p be a fixed computer program. If T(p) is given by some trustability guarantee formula and α is some desired level of trustability, an optimal cost strategy for supporting T(p) at level α (T(p) ≥ α) is a minimum cost strategy with the above attributes. A solution of this optimisation problem is based on classical optimisation methods under a set of constraints. The computational procedures cover a wide spectrum, from numerical optimisation to backtracking.

3. Trustability assessment

Let p be a selected computer program and assume that a single method m with detectability d(m) is available.
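The trustability formula and the minimum-cost strategy problem can be sketched as follows. This is an illustrative sketch, not code from the paper: the matrix D, the frequencies f, the cost vector and the trustability level 0.9 are all hypothetical values, the costs are taken as constants per method, and the binary strategy space is searched exhaustively (the backtracking mentioned above would prune this search for large n):

```python
from itertools import product

# Hypothetical detectability matrix D (n = 3 methods, k = 3 fault classes),
# fault-class frequencies f_j and per-method application costs c_i.
D = [[1.0, 0.3, 0.0],   # method m1: deterministic (detectability 1) for class 1
     [0.0, 0.8, 0.5],   # method m2
     [0.2, 0.0, 0.9]]   # method m3
f = [0.5, 0.3, 0.2]
cost = [4.0, 2.0, 3.0]

def trustability(D, f, s):
    """T(p) = 1 - max_j min_i f_j * (1 - d_ij)**s_i for a binary strategy s."""
    n, k = len(D), len(f)
    worst = max(f[j] * min((1 - D[i][j]) ** s[i] for i in range(n))
                for j in range(k))
    return 1 - worst

def strategy_cost(s, cost):
    """C(S, p) = sum_i s_i * c_i."""
    return sum(si * ci for si, ci in zip(s, cost))

def min_cost_strategy(D, f, cost, alpha):
    """Cheapest binary strategy S with T(p) >= alpha (exhaustive search)."""
    best = None
    for s in product((0, 1), repeat=len(D)):
        if trustability(D, f, s) >= alpha:
            c = strategy_cost(s, cost)
            if best is None or c < best[0]:
                best = (c, s)
    return best
```

With these illustrative numbers, applying all three methods gives T(p) = 0.94, while the cheapest strategy reaching the level 0.9 applies only m1 and m2.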
Then either the program has faults or it does not. When the program has faults, the risk factor is 1 - d(m); otherwise it is 0. If no faults are detected after applying the method m, the risk of the false conclusion that p is free of faults is less than 1 - d(m), so that the trustability of p is at least T(p), where T(p) = 1 - (1 - d(m)). Given a method m, how can d(m) be estimated? Different approaches are available during the development step or during the testing of the final product. The testing process depends on the complexity of the computer program (the number of executable statements, the total number of operators, the total number of operands (including class variables), the number of calls to functions (including message sending in the object-oriented setting), the number of distinct operators, the number of distinct operands, the number of distinct functions, McCabe's cyclomatic number, etc.). The application of the testing methods depends on the operational profile already developed. However, the detectability of a method results after preparing test cases and applying the test procedure. For example, methods for detecting uninitialised variables have detectability 1 for uninitialised-variable faults. If the repeated code readings method is used, it is important to estimate the probability of a single code reading finding at least one fault if the program contains a fault. Given a program under development and finally under testing, and the set of analysis methods, the detectability matrix is computed. We observe that detectability is an attribute of the method, but the methods for testing and analysis are developed for a particular computer program. If general methods for testing and analysis are used, these will depend on the characteristics, denoted above by θ, that will be determined for the computer program under analysis. Due to the modifications in the software project for a new release, it is clear that a new detectability matrix is obtained in the latter case.
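For the repeated code readings method mentioned above, a simple model (an assumption of this sketch, not stated in the paper) is that the readings are independent and each single reading finds at least one fault with probability r when p is faulty; k readings then have detectability 1 - (1 - r)^k, and the single-method trustability bound reduces to T(p) = 1 - (1 - d(m)) = d(m):

```python
def reading_detectability(r, k):
    """Detectability of k code readings, assuming independent readings that
    each find at least one fault with probability r (if p is faulty)."""
    return 1 - (1 - r) ** k

def single_method_trustability(d):
    """Single available method of detectability d, no faults detected:
    T(p) = 1 - (1 - d) = d."""
    return 1 - (1 - d)

# Hypothetical numbers: each reading finds a fault with probability 0.4;
# three readings raise the detectability to 1 - 0.6**3 = 0.784.
d = reading_detectability(0.4, 3)
```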
However, some methods will have the same detectability. But the determination of strategies for optimal trustability and cost will start from the beginning. The best approach consists in developing specialised methods for classes of faults: a method can detect only faults belonging to the corresponding class. More than one method can exist for a class of faults (in the requirements step of software development, at least two detection methods can be used: data flow analysis and object-oriented analysis). For estimating the optimal trustability the existence of a detectability matrix is necessary. The minimum cost strategy for achieving a trustability bound T(p) for a program p consists in determining the strategy components si introduced in the previous section.

4. Trustability and quality

It is well known that software testing is the fundamental basis of all approaches to software quality questions. Software test methods represent the formalisation and application of experimental processes in order to collect data for drawing conclusions about both software quality and reliability. The art of software testing is well developed, a large variety of methods being available. However, information obtained in the development process (based on static software metrics) and the results of the many types of test (branch coverage, call-pair coverage, path coverage, predicate coverage, class-hierarchy coverage, data flow coverage, mutation analysis, object-oriented testing, process assessment, algorithm assessment, etc.) are useful for improving the reliability of the software. Estimating the detectability of such methods is the first step in computing the trustability. If these methods have constant detectability (this is true only for a small class of software), then by applying the optimal cost strategy a good estimate of the trustability is obtained. The quality of a software program will be at least the trustability of this software.
This encourages efforts to look for detectability factors and for more trustability models.

5. Conclusions

In this paper the trustability of a computer program is considered to be an important indicator of software reliability and quality. The use of a large variety of testing and analysis methods, and the estimation of the quality of each method (by the detectability indicator) in order to find a strategy with optimal trustability at minimum cost, is a practical solution for improving the reliability and, finally, the quality of a program.

References

1. W. E. Howden, Functional Program Testing and Analysis, McGraw-Hill, February 1987.
2. W. E. Howden, Systematic informal software testing and analysis methods, Proceedings, Seventh International Software Quality Week, SRI, June 1994.
3. W. E. Howden, Y. Huang, Software Trustability, Fifth International Symposium on Software Reliability Engineering, Monterey, California, November 1994.
4. J. D. Musa, Software Reliability Engineering, McGraw-Hill, 1999.
5. Denise M. Woit, Estimating software reliability with hypothesis, CRL, McMaster University, April 1993.