An Application of Simulation
In Software Reliability Prediction
Case Study
By Vojo Bubevski
Bubevski, Vojo, "An Application of Simulation in Software Reliability Prediction", Case Study, 2008 Palisade™ Risk & Decision Analysis Conference, New York City, November 13th & 14th, 2008

Abstract

The quantitative approach to Software Quality Management is a standard requirement for all software development projects compliant with Capability Maturity Model (CMM™) Level 4. Software Reliability is also one of the main aspects of Software Quality. Thus, achieving the software reliability goals is a major objective for software development organisations, as it is a critical constraint on their projects. For example, the software will not be released to the customer for operation until the reliability goals have been achieved. Software Reliability is the probability of failure-free software operation for a specified time period. Predicting the reliability of a software system at some point in the future, based on data already available, is one of the important challenges of software projects. The implicit objective of management is to achieve the software system reliability goals with minimal project cost and schedule. Prediction in this sense is therefore very useful in supporting software project management to achieve this objective.

This paper presents an approach to applying simulation in software reliability prediction using Palisade™ @RISK®. The purpose of the paper is to demonstrate the practical aspects of software reliability simulation, so the theory is referenced only. A proof of the concept is established by experimenting on the reliability simulation of a real system. A unique data transformation method is elaborated and applied for this purpose. The method transforms the raw, unusable failure-count data into data usable for simulation, without affecting the reliability principles. The objective of the initial experiments is to select the most suitable model for this specific system. The selected simulation model is then used for reliability prediction of the real system. The simulation results, compared with the actual data, are satisfactory, thus proving the concept.

Also, a prediction of the reliability of a hypothetical financial software system is elaborated. The simulation experiment uses the data of the supposed current release in order to predict the reliability of the supposed next release. Important feasibility assumptions are discussed. Comparing the experiment results with the supposed actual data of the next release, the results are satisfactory. This model is simple but could be upgraded for complex simulations. In addition, some important recommendations for future work are provided for supporting software projects in achieving reliability goals with minimal cost and schedule, i.e. to develop optimization models for this purpose.
Introduction

Software reliability is defined as the probability of failure-free software operation for a specified period of time (American National Standards Institute – ANSI). It quantifies the failures of software systems and is the key factor in software quality [1]. It is also a major subject of Software Reliability Engineering (SRE) – a discipline which quantitatively studies the operational behavior of software systems with respect to the reliability requirements of the user.

The quantitative study of software systems concerning reliability involves software reliability measurement, which includes two activities: software reliability estimation and software reliability prediction. Software reliability estimation determines current software reliability based on failure data obtained in the past; its main purpose is to assess the current reliability of the software system. Software reliability prediction, however, determines the future reliability of a software system based upon software metrics data available now.

Software code size is measured in source Lines of Code (LOC); KLOC is one thousand LOC. The term defect is used generically in this paper to refer to either a fault (i.e. the cause of a failure) or a failure (i.e. the effect of a fault) [1]. The Cumulative Failure Function is defined as the mean cumulative failures associated with each point in time [1]. The Failure Intensity Function represents the rate of change of the cumulative failure function [1]. Mean Time to Failure (MTTF) is defined as the expected time at which the next failure will occur [1].

The classical approach to software reliability prediction is based on analytic models using statistical analysis of past failure data in order to predict future reliability. These models have been available in the literature since the early 1970s. The major software reliability analytic models are very well reviewed by Lyu [1]. The main characteristic of the analytic models is that unrealistic and oversimplified assumptions are required to obtain a simple analytic solution [1, 2].

The need for a modern approach to software reliability was recognized in 1993 by Von Mayrhauser et al. [2]: "Software reliability engineering must develop beyond statistical analysis of data and analytic models which frequently require unrealistic assumptions. We must develop a viable discipline of simulation to aid experimental and industrial application of software reliability engineering." It seems that with this pioneering work, the application of simulation in software reliability engineering was initiated. Since 1993, the application of simulation in software reliability engineering has emerged and substantial work has been published; some examples are the articles by Tausworthe, Lyu, Gokhale, and Trivedi [3, 4, 5, 6]. It should be highlighted that the results of these works indicated that "the simulation technique potentially may lead to more accurate tracking and more timely prediction of software reliability than obtainable from analytic modeling techniques" [3]. Also, the simulation models appeared to be subject to only a few fundamental assumptions, such as the independence of the causes of failures [1, 7].
Software reliability models are classified by the type of distribution of the failures that have occurred by time t [1]. The most important types of models within this classification are the Poisson and Binomial models [1].

A very interesting piece of work on software reliability simulation was published by Tausworthe and Lyu [7] as a chapter in a handbook of software reliability engineering [1]. This work elaborates the application of simulation techniques to typical software reliability processes, eliminating the simplifying assumptions needed for analytic models [7]. Special-purpose simulation tools for software reliability were designed, built and used in simulation experiments on a real-world project, e.g. the Galileo project at the Jet Propulsion Laboratory [1]. The simulation results were very close to the real system's data. Also, the simulation results were compared with the prediction results obtained from analytic models, which demonstrated that the analytic models do not seem to adequately predict the reliability of the system [7].

In contrast, this paper presents an application of a general-purpose simulation tool – Palisade™ @RISK® – in software reliability prediction. Monte Carlo simulation is used with the Poisson distribution, a very important distribution in practice [1]. The purpose of the paper is to demonstrate the practical aspect of software reliability simulation. Therefore, the theory of software reliability simulation is not discussed; the reader should refer to the cited references for theory discussions [1, 7].
Firstly, a proof of the concept is demonstrated by experimenting on the reliability simulation of a real system. The published data of the Galileo project at the Jet Propulsion Laboratory [1] is used for this purpose (i.e. the same data that was used by Tausworthe and Lyu in their work [7]). A unique data transformation method is elaborated and applied in practice. The method transforms the raw, unusable failure-count data into data usable for simulation, without affecting the reliability principles. This method is used to transform the Galileo raw failure-count data and simulate the reliability of the testing. The results of the simulations are compared with the actual data and discussed. It should be emphasised that the experimental results are satisfactory; therefore, the concept is proven.

Secondly, the prediction of the reliability of a hypothetical financial software system (Project TRPC) is elaborated. Two sets of data are available for this system, i.e. the data from two subsequent releases of the system – Release (i) and Release (i+1). The experiment uses the data of Release (i) (i.e. the current release) in order to predict the reliability of Release (i+1) (i.e. the next release). Important feasibility assumptions relating to this specific experiment are discussed, and so are the experiment results. Compared with the actual data of Release (i+1), the results are satisfactory; thus, the experiment is successful. This simulation experiment is relatively simple, including only the testing and operation failure data. The model could be expanded to consider the analysis and design failures if data is available. It should be noted that this simulation model is for illustration purposes.
Finally, recommendations for future work are given to provide for achieving the software project reliability goals with minimal cost and schedule. For example, it would be very useful to develop specific optimization models for this purpose.

In conclusion, the presented approach to software reliability prediction, including the unique method for transforming the raw failure-count data to be usable for simulation, is generic and applicable to any software project compliant with CMM™ Level 4. The experiments have demonstrated that Palisade™ @RISK® (a general-purpose simulation tool) can be used to predict software reliability. The obtained experimental results are satisfactory and acceptable. Using Palisade™ @RISK® is much easier than using the special-purpose software reliability tools. Also, the Palisade™ @RISK® tools provide for comprehensive data presentation and analysis of the simulation results, which is not the case with the special-purpose tools. The demonstrated simulation models are simple; however, they could easily be upgraded to provide for more complex reliability prediction if data is available.
Proof of Concept

In order to prove the concept of this paper, we experiment with real system data to simulate software reliability using the Palisade™ @RISK® tools. For this purpose, the data of the Galileo project at the Jet Propulsion Laboratory [1] is used. The following outlines how the concept is proven:
1. Present and analyse the actual (raw) Galileo data;
2. Transform the Galileo data for simulation, as the raw data is unusable;
3. Simulate the reliability using two different simulation models;
4. Compare the two simulations' results with the actual data in order to select the better simulation model for future Galileo simulations;
5. Use the selected model to show how we can predict the reliability at the end of testing, supposing that we are in the middle of the testing stage.

The Proof-of-Concept approach is described in the following sections.
Galileo Actual Data
The Galileo failure-count data was collected during a testing period of 41 weeks. The data is given in Appendices, Table 1. Figure 1 shows the Galileo project's actual failure intensity function.

[Figure 1: Galileo project actual failure intensity function – failures per week vs. calendar week]
The total number of defects detected and removed during the 41 weeks of testing is 351. The reliability is measured by Mean Time to Failure (MTTF), which is calculated assuming a uniform distribution of the defects during a specific time period. Thus, we simply use calendar time in weeks to calculate the MTTF for the Galileo project in testing as follows: MTTF = 41/351 = 0.1168 weeks.

A simple analysis of the data shown in Figure 1 is as follows. The numbers of failures detected in each time interval are independent Poisson random variables [1], so it is impossible to correlate the number of failures detected each week. The failure intensity function exhibits a strongly zigzagging decreasing behavior. Consequently, the data is raw and the failure intensity function is not practical for simulation. However, we can transform the raw data to be usable for simulation without changing the principal reliability values (i.e. the number of failures detected in each time interval, the time period, and the total number of failures detected). The following section explains the transformation of the raw data to make it usable for simulation.
Method to Transform Raw Data for Simulation
The method to transform the data without changing the reliability principles is as follows. Considering the fact that the numbers of failures detected in each time interval are independent Poisson random variables [1], we can reorder the time intervals, preserving: a) the number of failures detected in each interval; b) the time period; and c) the total number of failures detected during the time period. The criterion for reordering the intervals is that the numbers of failures detected per interval must be put in descending order. This transforms the failure intensity function from a strongly zigzagging decreasing type to a smoothly decreasing type, which is usable for simulation.

For example, the Galileo reliability measure, i.e. MTTF, is not changed by sorting the data in descending order, as we have not changed the numbers of failures detected in each week, and we have preserved the 41-week period and the total of 351 defects. That is, the MTTF of the raw Galileo data is equal to the MTTF of the sorted Galileo data (i.e. MTTF = 41/351 = 0.1168). The sorted data, as well as the raw data, is presented in Appendices, Table 1. The intensity function of the sorted data is shown in Figure 2.
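The transformation itself is just a descending sort of the weekly failure counts. As an illustration only (the paper performs the equivalent steps in an Excel worksheet driving @RISK), here is a minimal Python sketch using the Galileo data from Table 1:

```python
# Galileo raw weekly failure counts (Appendices, Table 1).
raw = [4, 12, 15, 9, 28, 29, 8, 7, 4, 8, 9, 12, 8, 4, 14, 19, 23, 12,
       22, 12, 13, 19, 10, 5, 5, 5, 7, 7, 1, 3, 1, 2, 0, 2, 9, 1, 0,
       0, 0, 1, 1]

# Reorder the time intervals: weekly counts in descending order.
sorted_counts = sorted(raw, reverse=True)

# The reliability principles are preserved: the same per-interval counts,
# the same 41-week period and the same total of 351 defects, hence the
# same MTTF.
assert sum(sorted_counts) == sum(raw) == 351
mttf = len(raw) / sum(raw)  # 41/351 = 0.1168 weeks
print(f"MTTF = {mttf:.4f} weeks")
```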
[Figure 2: Galileo sorted failure intensity function – failures per week vs. time (week)]

The sorted failure intensity function presented in Figure 2 is now usable for software reliability simulation of the Galileo project testing during the 41-week period. This is possible because the reliability measure MTTF has not changed.
Galileo Reliability Simulation 1: Exponential Failure Intensity
This simulation model uses the Poisson distribution with an exponential failure intensity function. The exponential approximation of the Galileo failure intensity function is presented in Figure 3.

[Figure 3: Exponential approximation of the Galileo failure intensity function – y = 34.74e^(-0.0884x), R² = 0.9295]
The approximation of the failure intensity y (i.e. failures per week) as a function of time x (i.e. week) is y = 34.74e^(-0.0884x). Therefore, we simulate the reliability using the Poisson distribution. The mean of the Poisson distribution is equal to the value of the failure intensity function at time t. The simulation results and statistics are shown in Appendices, Table 2 and Table 3. The simulation distribution is shown in Figure 4.

[Figure 4: Distribution of the Galileo simulation with exponential failure intensity]

From the results, we can see that the predicted total number of defects is 361, which is quite close to the actual 351, with a standard deviation of 19 (i.e. 5.3%).
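To show the mechanics, here is a minimal Monte Carlo sketch of Simulation 1 in Python, with NumPy standing in for @RISK. The 1,000 iterations and the random seed are assumptions, not values taken from the paper; the cut-off of the intensity after week 37 follows Table 2, which lists zero intensity for weeks 38–41.

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed, for reproducibility only

def simulate_total_defects(intensity, weeks=41, iterations=1000):
    """For each iteration, draw one Poisson sample per week with mean equal
    to the failure intensity at that week, then total the weekly draws."""
    means = intensity(np.arange(1, weeks + 1))
    draws = rng.poisson(means, size=(iterations, weeks))
    return draws.sum(axis=1)

# Exponential failure intensity fitted to the sorted Galileo data (Figure 3);
# Table 2 shows the intensity treated as zero from week 38 onwards.
def exp_intensity(t):
    return np.where(t <= 37, 34.74 * np.exp(-0.0884 * t), 0.0)

totals = simulate_total_defects(exp_intensity)
print(f"Predicted total defects: {totals.mean():.0f} "
      f"(std. dev. {totals.std():.0f})")  # approx. 361 and 19, as in Table 3
```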
Galileo Reliability Simulation 2: Logarithmic Failure Intensity
We use the Poisson distribution with a logarithmic failure intensity function in this simulation. The logarithmic failure intensity function is shown in Figure 5.

[Figure 5: Logarithmic approximation of the Galileo failure intensity function – y = -8.6744·ln(x) + 32.687, R² = 0.9819]
The approximation of the failure intensity y (i.e. failures per week) as a function of time x (i.e. week) is y = -8.6744·ln(x) + 32.687. Again, we simulate the reliability using the Poisson distribution. The mean of the Poisson distribution is equal to the value of the failure intensity function at time t. The simulation results and statistics are given in Appendices, Table 4 and Table 5. The simulation distribution is shown in Figure 6.

[Figure 6: Distribution of the Galileo simulation with logarithmic failure intensity]

For this simulation, the predicted total number of defects is 352 with a standard deviation of 19 (i.e. 5.4%). This result is almost equal to the actual 351.
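Reusing the simulate_total_defects helper from the sketch in the previous section, only the intensity function changes; the clamp to zero guards the Poisson mean, which must be non-negative:

```python
# Logarithmic failure intensity fitted to the sorted data (Figure 5).
log_intensity = lambda t: np.maximum(-8.6744 * np.log(t) + 32.687, 0.0)

totals = simulate_total_defects(log_intensity)
print(f"Predicted total defects: {totals.mean():.0f} "
      f"(std. dev. {totals.std():.0f})")  # approx. 352 and 19, as in Table 5
```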
Comparing Results and Selecting Simulation Model for Galileo
The simulation results are as follows:
1. Simulation 1: predicted total of 361 defects, with a standard deviation of 19 (i.e. 5.3%).
2. Simulation 2: predicted total of 352 defects, with a standard deviation of 19 (i.e. 5.4%).

Comparing the actual total of 351 defects with the results of Simulation 1 and Simulation 2, it is obvious that the Simulation 2 result is much better. Thus, the Simulation 2 model, with the logarithmic failure intensity function, is selected for future simulations of Galileo.
Simulation 3: Predicting 41 Weeks Reliability after 21 Weeks
Supposing that we are at the end of week 21, we can predict the reliability until the end of the testing stage, i.e. 41 weeks. This means that we only have data available for the first 21 weeks. Thus, we use the Poisson distribution with a logarithmic failure intensity function fitted to the 21 weeks of available data. The total number of defects detected in the first 21 weeks is 272.
Thus, we can calculate the reliability measure as MTTF = 21/272 = 0.0772 weeks. The actual data from the first 21 weeks of testing is shown in Figure 7.

[Figure 7: Actual Galileo data from 21 weeks of testing – failures per week vs. calendar week]

We now need to prepare the raw data for simulation by sorting it in descending order. Note that this sort does not change the reliability measure, i.e. MTTF = 21/272 = 0.0772 weeks. After sorting the data, the logarithmic failure intensity function is as shown in Figure 8.

[Figure 8: Logarithmic failure intensity function for 21 weeks – y = -8.8835·ln(x) + 32.149, R² = 0.9653]

The approximation of the failure intensity y (i.e. failures per week) as a function of time x (i.e. week) is y = -8.8835·ln(x) + 32.149.
Again, we simulate the reliability using the Poisson distribution. The mean of the Poisson distribution is equal to the value of the failure intensity function at time t. The simulation results and statistics are given in Appendices, Table 6 and Table 7. The simulation distribution is shown in Figure 9.

[Figure 9: Distribution of the Galileo simulation based on 21 weeks of data]

For this simulation, the predicted total number of defects is 310 with a standard deviation of 17 (i.e. 5.5%). This result is not very good compared with the actual 351 defects, but it is the best we can get with only 21 weeks of data. The prediction would have been better with more data available, as we show in the next simulation.
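The fitted coefficients come from an ordinary least-squares fit of y = a·ln(x) + b to the sorted counts, which is what Excel's logarithmic trendline computes. A sketch of the whole 21-week prediction, again with NumPy in place of Excel and @RISK (iteration count and seed are assumed):

```python
import numpy as np

# First 21 weeks of raw Galileo failure counts (Table 1), sorted descending.
raw_21 = [4, 12, 15, 9, 28, 29, 8, 7, 4, 8, 9, 12, 8, 4, 14, 19, 23,
          12, 22, 12, 13]
y = np.array(sorted(raw_21, reverse=True), dtype=float)
x = np.arange(1, 22)

# Least-squares fit of y = a*ln(x) + b (Excel's logarithmic trendline).
a, b = np.polyfit(np.log(x), y, 1)
print(f"y = {a:.4f}*ln(x) + {b:.4f}")  # close to y = -8.8835*ln(x) + 32.149

# Extrapolate the intensity to all 41 weeks; negative extrapolated means
# are clamped to zero (Table 6 shows zero intensity from week 38 onwards).
means = np.maximum(a * np.log(np.arange(1, 42)) + b, 0.0)
totals = np.random.default_rng(42).poisson(means, size=(1000, 41)).sum(axis=1)
print(f"Predicted total defects: {totals.mean():.0f}")  # approx. 310 (Table 7)
```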
Simulation 4: Predicting 41 Weeks Reliability after 23 Weeks
Supposing that we are now at the end of week 23, we can predict the reliability until the end of the testing stage, i.e. 41 weeks. This means that we only have data available for the first 23 weeks. Thus, we use the Poisson distribution with a logarithmic failure intensity function fitted to the 23 weeks of available data. The total number of defects detected in the first 23 weeks is 301. Thus, we can calculate the reliability measure as MTTF = 23/301 = 0.0764 weeks. The actual data from the first 23 weeks of testing is shown in Figure 10.
[Figure 10: Actual Galileo data from 23 weeks of testing – failures per week vs. time (week)]

We now need to prepare the raw data for simulation by sorting it in descending order. Note that this sort does not change the reliability measure, i.e. MTTF = 23/301 = 0.0764 weeks. After sorting the data, the logarithmic failure intensity function is as shown in Figure 11.

[Figure 11: Logarithmic failure intensity function for 23 weeks – y = -8.5579·ln(x) + 32.289, R² = 0.9672]

The approximation of the failure intensity y (i.e. failures per week) as a function of time x (i.e. week) is y = -8.5579·ln(x) + 32.289. Again, we simulate the reliability using the Poisson distribution. The mean of the Poisson distribution is equal to the value of the failure intensity function at time t. The simulation results and statistics are given in Appendices, Table 8 and Table 9. The simulation distribution is shown in Figure 12.
[Figure 12: Distribution of the Galileo simulation based on 23 weeks of data]

For this simulation, the predicted total number of defects is 346 with a standard deviation of 19 (i.e. 5.49%). This result is very good compared with the actual 351 defects. Thus, we have shown that the prediction with 23 weeks of data proves the concept.
Proof of Concept Summary
The actual Galileo failure-count data is presented and analysed. The analysis shows that the data is raw and the failure intensity function exhibits a strongly zigzagging decreasing behavior, so it is unusable for simulation. However, the raw data can be transformed to provide for simulation without changing the reliability principles. The method to transform the raw data is to reorder the time intervals in descending order of the number of failures, preserving: a) the number of failures detected in each interval; b) the time period; and c) the total number of failures detected. This method is feasible because the numbers of failures detected in each time interval are independent Poisson random variables [1]. The method transforms the failure intensity function from a strongly zigzagging decreasing type to a smoothly decreasing type, which is usable for simulation.

The reliability is simulated using the Poisson distribution with two different approximations of the failure intensity function: a) exponential; and b) logarithmic. Comparing the results of the two simulations with the actual data shows that the logarithmic model is the better simulation model for Galileo. This model is used in two predictions of the reliability at the end of testing, supposing that we are in the middle of the testing stage. The 41-week testing reliability predictions for Galileo are as follows.
The first prediction is at the end of week 21, so the simulation is based on 21 weeks of data. The predicted total number of defects is 310 with a standard deviation of 17 (i.e. 5.5%). The second simulation is based on 23 weeks of data, as the prediction is at the end of week 23. The predicted total number of defects is 346 with a standard deviation of 19 (i.e. 5.49%). Compared with the actual total of 351 defects, the first result of 310 defects is not bad (i.e. -11.68% error), whereas the second result of 346 defects is very good (i.e. -1.42% error). The second prediction is better than the first simply because the simulation was carried out on data taken over a longer time period, i.e. it is based on more available data.

In conclusion, the experimental results are satisfactory. Therefore, the concept is proven.
TRPC Next Release Simulation – Hypothetical Experiment

In this reliability simulation experiment, we use the current release data to simulate the reliability of the next release of a hypothetical software system. It is supposed that we have collected data for two subsequent releases of the hypothetical financial software system – Project TRPC. It should be noted that this simulation experiment is hypothetical, so it is for illustration purposes only. The following is an outline of this chapter:
1. Feasibility assumptions are discussed;
2. The data for the two TRPC Project releases are presented;
3. A simple simulation model is demonstrated;
4. The simulation results and the actual data are compared.

Even though this experiment is hypothetical, it illustrates how we can simulate the next release of a software system using the data of the current release.
Feasibility Assumptions for Software Reliability Simulation
The software reliability simulation uses failure data collected in the past. However, having the collected data alone is not sufficient to make software reliability simulation feasible. Other, much more complex criteria must be met in addition to the data. For example, if an organisation collects and keeps a history of data for its software projects but does not meet the other criteria, software reliability simulation is not feasible: the data allow a simulation to be run, but any reliability prediction initiative is unrealistic and even dangerous, because the simulation results are inconsistent and any decision made based on these results is very risky.

The following defines the fundamental assumption for the feasibility of software reliability simulation: software reliability simulation and prediction are feasible only if the software organisation and the software project are compliant with Capability Maturity Model (CMM™) Level 4. CMM™ Level 4 requires quantitative management of software processes and products within an organisation. The criteria are as follows: "Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled." [8]. Some aspects of software reliability prediction relating to CMM™ Level 4 are discussed by Lakey [9].
TRPC Project Data
We suppose that we have collected data for two subsequent releases of the TRPC Project, i.e. “Current TRPC Release” and “Next TRPC Release” Project data. The “Current TRPC Release” Project data is presented in Appendices, Table 10. The “Next TRPC Release” Project data is given in Appendices, Table 11. We will use the “Current TRPC Release” Project data to simulate the “Next TRPC Release” Project reliability.
TRPC Project Next Release Simulation
In our simulation experiment, we will use the "Current TRPC Release" Project data to simulate the "Next TRPC Release" Project reliability (i.e. all parameters of the simulation model are calculated from the current release). The simulation model is as follows; a code sketch of the complete model is given after the results below.

1. We expect that the size of the new code will be around 40 KLOC (i.e. 40,000 source lines of code). Thus, we calculate the size of the new code using the Normal distribution with a mean of 40 and a standard deviation of 2 (5%).

   1. New Code Size Prediction (Normal Distribution)
      New Code Predicted Size KLOC:   40.00
      Mean Value (µ):                 40.00
      Standard Deviation (σ):          2.00

2. We expect that the size of the changed code will be around 25 KLOC. We use the Normal distribution with a mean of 25 and a standard deviation of 1.25 (5%) to calculate the changed code size.

   2. Changed Code Size Prediction (Normal Distribution)
      Changed Code Predicted Size KLOC:   25.00
      Mean Value (µ):                     25.00
      Standard Deviation (σ):              1.25

3. We assume that the Defect Injection Rate (DIR) for the new code will be equal to the current release rate, i.e. 47.69 defects/KLOC. To calculate this parameter we use the Normal distribution with a mean of 47.69 and a 5% standard deviation (i.e. 2.38).

   3. New Code Defect Injection Rate (DIR) Prediction Based on Current Release Data (Normal Distribution)
      Current Release DIR per KLOC:      47.69
      New Code Predicted DIR per KLOC:   47.69
      Mean Value (µ):                    47.69
      Standard Deviation (σ):             2.38

4. For the changed code, the Defect Injection Rate (DIR) will be equal to the current release rate, i.e. 29.44 defects/KLOC. To calculate this parameter we use the Normal distribution with a mean of 29.44 and a standard deviation of 1.47 (5%).

   4. Changed Code Defect Injection Rate (DIR) Prediction Based on Current Release Data (Normal Distribution)
      Current Release DIR per KLOC:          29.44
      Changed Code Predicted DIR per KLOC:   29.44
      Mean Value (µ):                        29.44
      Standard Deviation (σ):                 1.47

5. We assume that for the new code the effort required to test and fix defects during testing is equal to the current release effort, i.e. 64.14 man-days/KLOC. To calculate this parameter we use the Normal distribution with a mean of 64.14 and a standard deviation of 3.21 (5%).

   5. Effort for Testing & Fixing New Code Prediction Based on Current Release Data (Normal Distribution)
      Current Release Man-Days per KLOC:        64.14
      Test & Fix Predicted Man-Days per KLOC:   64.14
      Mean Value (µ):                           64.14
      Standard Deviation (σ):                    3.21

6. Similarly, for the changed code, we assume that the effort required to test and fix defects during testing is equal to the current release effort, i.e. 59.69 man-days/KLOC. To calculate this parameter we use the Normal distribution with a mean of 59.69 and a standard deviation of 2.98 (5%).

   6. Effort for Testing & Fixing Changed Code Prediction Based on Current Release Data (Normal Distribution)
      Current Release Man-Days per KLOC:        59.69
      Test & Fix Predicted Man-Days per KLOC:   59.69
      Mean Value (µ):                           59.69
      Standard Deviation (σ):                    2.98

7. We expect that the new code Defect Removal Rate (DRR) will be the same as the current rate, i.e. 1.79 man-days/defect. Thus, we calculate this parameter using the Normal distribution with a mean of 1.79 and a standard deviation of 0.09 (5%).

   7. New Code Defect Removal Rate (DRR) Prediction Based on Current Release Data (Normal Distribution)
      Current Release DRR Man-Days per Defect:      1.79
      New Code Predicted DRR Man-Days per Defect:   1.79
      Mean Value (µ):                               1.79
      Standard Deviation (σ):                       0.09

8. Similarly, the changed code Defect Removal Rate (DRR) will be the same as the current rate, i.e. 2.69 man-days/defect. Thus, we calculate this parameter using the Normal distribution with a mean of 2.69 and a standard deviation of 0.13 (5%).

   8. Changed Code Defect Removal Rate (DRR) Prediction Based on Current Release Data (Normal Distribution)
      Current Release DRR Man-Days per Defect:          2.69
      Changed Code Predicted DRR Man-Days per Defect:   2.69
      Mean Value (µ):                                   2.69
      Standard Deviation (σ):                           0.13

9. Using the parameters above, we calculate the defect injection and defect removal intensities for the new and changed code, as given below. We then simulate the numbers of injected and removed defects using the Poisson distribution with mean equal to the associated defect intensity (in the original worksheet the simulated values are shown in blue). The total number of defects in operation is the difference between the injected and removed defects.

   9. Next Release Defect Totals Prediction (Poisson Distribution)
      New Code Defect Injection Intensity:       1902.60
      Changed Code Defect Injection Intensity:    735.94
      New Code Defect Removal Intensity:         1435.90
      Changed Code Defect Removal Intensity:      554.72
      Predicted New Code Defects Injected:       1903.00
      Predicted Changed Code Defects Injected:    736.00
      Predicted New Code Defects Removed:        1436.00
      Predicted Changed Code Defects Removed:     555.00
      Predicted Total Defects in Operation:       648.00

The simulation distribution is given in Figure 13.

[Figure 13: Distribution of the TRPC Project simulation]

For this simulation, the predicted total number of defects is 648 with a standard deviation of 164 (i.e. 25.31%). The high standard deviation of 25.31% is caused by the large number of random variables used in the model.
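To make the model concrete, here is a minimal reimplementation in Python with NumPy standing in for @RISK. The distribution parameters are exactly those of steps 1–8; the iteration count and the seed are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)   # arbitrary seed
N = 10_000                       # iteration count (assumed)

# Steps 1-2: code sizes in KLOC, Normal with a 5% standard deviation.
new_kloc = rng.normal(40.00, 2.00, N)
chg_kloc = rng.normal(25.00, 1.25, N)

# Steps 3-4: defect injection rates (defects/KLOC) from the current release.
new_dir = rng.normal(47.69, 2.38, N)
chg_dir = rng.normal(29.44, 1.47, N)

# Steps 5-6: test-and-fix effort (man-days/KLOC) from the current release.
new_effort = rng.normal(64.14, 3.21, N)
chg_effort = rng.normal(59.69, 2.98, N)

# Steps 7-8: defect removal rates (man-days/defect) from the current release.
new_drr = rng.normal(1.79, 0.09, N)
chg_drr = rng.normal(2.69, 0.13, N)

# Step 9: injection/removal intensities feed Poisson draws; the defects left
# in operation are the injected minus the removed defects.
injected = rng.poisson(new_kloc * new_dir) + rng.poisson(chg_kloc * chg_dir)
removed = (rng.poisson(new_kloc * new_effort / new_drr)
           + rng.poisson(chg_kloc * chg_effort / chg_drr))
in_operation = injected - removed

print(f"Defects in operation: {in_operation.mean():.0f} "
      f"(std. dev. {in_operation.std():.0f})")  # the paper reports 648 and 164
```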
TRPC Project Simulation Results vs Actual Data
The comparison of the simulation results with the actual hypothetical data is given below.

Simulation Results vs Actual Data
                                        Actual   Simulation   Error %
New Code Size KLOC:                       38         40         5.27
Changed Code Size KLOC:                   26         25        -3.85
Total Code Size KLOC:                     64         65         1.56
Total Number of Defects in Operation:    690        648        -6.09

The results are quite good (-6.09% error for the total number of defects), so our hypothetical experiment is successful.
TRPC Next Release Simulation Summary
This simulation experiment is hypothetical; however, it demonstrates how we can simulate the next release of a software project using the data of the current release. The simulation model is simple, considering only the testing and operational phases of the software project. The experiment is for illustration purposes only. The model can easily be expanded to include the analysis and design phases of the project if the failure data is available.
The assumptions are discussed first in order to establish the criteria under which reliability simulation is feasible. Then, the TRPC Project data of the "current release" and the "next release" are presented, and the simulation model, which predicts the reliability of the "next release" using the "current release" data, is demonstrated. Finally, the simulation results are compared with the actual "next release" data, confirming that the experiment is successful.

Recommendations for Future Work

The management objective is to achieve the software system reliability goals with minimal project cost and schedule. Software reliability simulation is very useful in supporting software project management to achieve this objective. Thus, as future work, it is recommended to develop optimization models for this purpose. For example, suppose that management wants to employ extra resources on a project, but the resources are limited. An appropriate optimization model can provide an optimal solution to this problem, i.e. maximize the reliability improvement by optimal utilization of the limited resources.
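As a purely hypothetical illustration of such an optimization model (all numbers below are invented for the example; the man-days-per-defect figures merely echo the testing-phase profiles of Table 10), one could maximize the defects removed by a limited pool of extra man-days:

```python
# Hypothetical example: allocate a limited budget of extra test man-days
# across testing phases to maximize the expected defects removed.
budget = 200.0  # extra man-days available (assumed)

# (phase, man-days per defect removed, defects still believed present)
phases = [("Component Test",             0.92, 60),
          ("Component Integration Test", 1.51, 45),
          ("System Integration Test",    1.88, 80),
          ("User Acceptance Test",       3.74, 30)]

# With a linear cost per defect, the optimal plan is greedy: fund the
# cheapest phase first and move on once its defects are exhausted.
plan, removed = [], 0
for name, cost, cap in sorted(phases, key=lambda p: p[1]):
    defects = min(cap, int(budget / cost))
    budget -= defects * cost
    removed += defects
    plan.append((name, defects))

print(plan, f"-> {removed} defects removed")
```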
Conclusion

The paper presents experiments in software reliability simulation using Palisade™ @RISK® (a general-purpose simulation tool). The simulation models use Monte Carlo sampling with the Poisson distribution. The purpose of the paper is to demonstrate the practical aspect of simulation, so the theory is not discussed.

A proof of concept is demonstrated with simulations of a real system, i.e. the Galileo project at the Jet Propulsion Laboratory [1]. A unique method is elaborated and applied in practice to transform raw, unusable failure-count data into data usable for simulation. The method does not affect the software reliability principles. It should be emphasised that the method is generic and applicable to any software reliability simulation. The Galileo project testing is simulated in the experiments and the results are compared with the actual data. The experimental results are satisfactory, hence proving the concept.

In addition, a simulation of a hypothetical system (Project TRPC) is elaborated. The purpose of the experiment is to use the data of the current release in order to simulate the next release. Important feasibility assumptions are discussed. The simulation model is presented and the experiment results are compared, confirming that the experiment is successful. The simulation model is relatively simple, including only the testing and operation phases. The model can be expanded to consider the analysis and design failures if data is available. It should be noted that this simulation model is for illustration purposes only, as hypothetical data is used.

As future work, it is recommended to develop optimization models in order to support management in achieving the software system reliability goals with minimal costs.

The following are the major conclusions to emphasise. Firstly, the presented approach to software reliability prediction is generic and applicable to any CMM™ Level 4 software project. Secondly, the method for transforming the unusable raw failure-count data into data usable for simulation is unique and generic. Thirdly, general-purpose simulation tools such as Palisade™ @RISK® can be used for software reliability simulation; the experimental results presented in this paper are satisfactory. Using Palisade™ @RISK® is much easier than using the special-purpose tools, and the Palisade™ @RISK® tools provide for comprehensive data presentation and analysis, which is not the case with the special-purpose tools. Finally, the presented simulation models are simple; however, they can be upgraded to provide for more complex reliability simulation if data is available.
Appendices

The project data discussed in this paper is presented in this section, together with the simulation results and statistics. Some rows and columns are hidden for practical purposes.
Galileo CDS Raw and Sorted Data
Data Format: Failure-count data; Time Unit: Weeks

Calendar Week   Raw Failures per Week   Sorted Failures per Week
      1                  4                        29
      2                 12                        28
      3                 15                        23
      4                  9                        22
      5                 28                        19
      6                 29                        19
      7                  8                        15
      8                  7                        14
      9                  4                        13
     10                  8                        12
     11                  9                        12
     12                 12                        12
     13                  8                        12
     14                  4                        10
     15                 14                         9
     16                 19                         9
     17                 23                         9
     18                 12                         8
     19                 22                         8
     20                 12                         8
     21                 13                         7
     22                 19                         7
     23                 10                         7
     24                  5                         5
     25                  5                         5
     26                  5                         5
     27                  7                         4
     28                  7                         4
     29                  1                         4
     30                  3                         3
     31                  1                         2
     32                  2                         2
     33                  0                         1
     34                  2                         1
     35                  9                         1
     36                  1                         1
     37                  0                         1
     38                  0                         0
     39                  0                         0
     40                  1                         0
     41                  1                         0
  Total                351                       351

Table 1: Raw and sorted Galileo failure-count data
Galileo CDS Monte Carlo Simulation with Exponential Failure Intensity Function – 41 Weeks
Predicted System's Data (41 Weeks)
Data Format: Failure-count data (time unit in weeks)
Column 1: Time Interval
Column 2: Failure Intensity, y = 34.74*EXP(-0.0884*x)
Column 3: Predicted Failures/Week using the Poisson distribution with mean equal to the failure intensity value at time x

Time Week   Failure Intensity   Predicted Failures/Week   Actual Failures/Week
    1          31.80080999               32                      29
    2          29.11029119               29                      28
    3          26.64740469               27                      23
    4          24.39289157               24                      22
    5          22.32912234               22                      19
    6          20.43995903               20                      19
   ...             ...                   ...                     ...
   34           1.719944049               2                       1
   35           1.574427573               2                       1
   36           1.441222571               1                       1
   37           1.319287424               1                       1
   38           0                         0                       0
   39           0                         0                       0
   40           0                         0                       0
   41           0                         0                       0
Total:                                   361                     351

Table 2: Simulation 1 Results

@RISK Detailed Statistics (performed by BUBEVSKI, 06 August 2008 20:41:25)

Name               Total: / 3. Predicted    1 / 3. Predicted       37 / 3. Predicted
                   Failures/Week            Failures/Week          Failures/Week
Description        Output                   RiskPoisson($B$12)     RiskPoisson($B$48)
Cell               MC EXP Simulation!C53    MC Simulation C2!C12   MC Simulation C2!C48
Minimum            306                      17                     0
Maximum            427                      50                     6
Mean               361.795                  31.725                 1.362
Std Deviation      18.89736                 5.729765               1.145543
Variance           357.1101                 32.8302                1.312268
Skewness           -0.02712095              0.1931779              0.8212588
Kurtosis           3.164034                 2.922069               3.572076
Target #1 (Value)  351
Target #1 (Perc%)  0.285

Table 3: Simulation 1 Statistics
Galileo CDS Monte Carlo Simulation with Logarithmic Failure Intensity Function – 41 Weeks
Predicted System's Data (41 Weeks)
Data Format: Failure-count data (time unit in weeks)
Column 1: Time Interval
Column 2: Failure Intensity, y = -8.6744*Ln(x) + 32.687
Column 3: Predicted Number of Failures (Poisson Distribution)

Time Week   Failure Intensity   Predicted Failures/Week   Actual Failures/Week
    1          32.687                    33                      29
    2          26.6743641                27                      28
    3          23.15719756               23                      23
    4          20.66172819               21                      22
    5          18.72609177               19                      19
    6          17.14456166               17                      19
   ...             ...                   ...                     ...
   34           2.097938265               2                       1
   35           1.846488775               2                       1
   36           1.60212332                2                       1
   37           1.364453659               1                       1
   38           1.133122616               1                       0
   39           0.907800857               1                       0
   40           0.688184063               1                       0
   41           0.473990465               0                       0
Total:                                   352                     351

Table 4: Simulation 2 Results

@RISK Detailed Statistics (performed by BUBEVSKI, 06 August 2008 20:13:22)

Name               Total: / 3. Predicted    1 / 3. Predicted       41 / 3. Predicted
                   Failures/Week            Failures/Week          Failures/Week
Description        Output                   RiskPoisson($B$12)     RiskPoisson($B$52)
Cell               MC LOG Simulation!C53    MC Simulation C1!C12   MC Simulation C1!C52
Minimum            290                      17                     0
Maximum            416                      54                     4
Mean               351.747                  32.946                 0.492
Std Deviation      19.18116                 5.743873               0.6902264
Variance           367.9169                 32.99208               0.4764124
Skewness           0.1393759                0.1426186              1.447088
Kurtosis           3.035162                 3.122335               5.290806
Target #1 (Value)  351
Target #1 (Perc%)  0.514

Table 5: Simulation 2 Statistics
Galileo CDS Monte Carlo Simulation with Logarithmic Failure Intensity Function Based on 21 Weeks Data
Predicted System's Data (41 Weeks) using the Poisson Distribution
Note: The mean of the Poisson distribution is equal to the failure intensity function value.
Column 1: Time Interval
Column 2: Failure Intensity Function, y = -8.8835*Ln(x) + 32.149
Column 3: Predicted Number of Failures (Poisson Distribution)

Time Week   Failure Intensity   Predicted Failures/Week   Real Sorted Failures/Week
    1          32.149                    32                      29
    2          25.99142702               26                      28
    3          22.38947773               22                      23
    4          19.83385404               20                      22
    5          17.8515583                18                      19
    6          16.23190476               16                      19
   ...             ...                   ...                     ...
   31           1.643174669               2                       2
   32           1.361135107               1                       2
   33           1.087775078               1                       1
   34           0.82257628                1                       1
   35           0.565065496               1                       1
   36           0.31480951                0                       1
   37           0.071410723               0                       1
   38           0                         0                       0
   39           0                         0                       0
   40           0                         0                       0
   41           0                         0                       0
Total:        309.8860796                310                     351

Table 6: Galileo Simulation 3 results

@RISK Detailed Statistics (performed by BUBEVSKI, 06 August 2008 16:38:50)

Name               Total: / 3. Predicted    1 / 3. Predicted         41 / 3. Predicted
                   Failures/Week            Failures/Week            Failures/Week
Description        Output                   RiskPoisson(ABS(B12))    RiskPoisson(ABS(B52))
Cell               MCLog21W Simulation!C53  MCLog21W Simulation!C12  MCLog21W Simulation!C52
Minimum            265                      17                       0
Maximum            369                      52                       6
Mean               310.015                  32.3                     0.84
Std Deviation      17.20543                 5.743953                 0.9355615
Variance           296.0268                 32.99299                 0.8752753
Skewness           0.1575875                0.3057922                1.116838
Kurtosis           3.058967                 3.138245                 4.476752
Target #1 (Value)  351
Target #1 (Perc%)  0.988

Table 7: Statistics for Galileo Simulation 3
Galileo CDS Monte Carlo Simulation with Logarithmic Failure Intensity Function Based on 23 Weeks Data
Predicted System's Data (41 Weeks) using the Poisson Distribution
Note: The mean of the Poisson distribution is equal to the failure intensity function value.
Column 1: Time Interval
Column 2: Failure Intensity Function, y = -8.5579*Ln(x) + 32.289
Column 3: Predicted Number of Failures (Poisson Distribution)

Time Week   Failure Intensity   Predicted Failures/Week   Real Sorted Failures/Week
    1          32.289                    32                      29
    2          26.35711574               26                      28
    3          22.88718589               23                      23
    4          20.42523149               20                      22
    5          18.51559129               19                      19
   ...             ...                   ...                     ...
   35           1.862686825               2                       1
   36           1.621603277               2                       1
   37           1.387125595               1                       1
   38           1.158901404               1                       0
   39           0.936605789               1                       0
   40           0.71993852                1                       0
   41           0.50862161                1                       0
Total:        347.955619                 346                     351

Table 8: Galileo Simulation 4 results

@RISK Detailed Statistics (performed by BUBEVSKI, 06 August 2008 16:56:35)

Name               Total: / 3. Predicted    1 / 3. Predicted         41 / 3. Predicted
                   Failures/Week            Failures/Week            Failures/Week
Description        Output                   RiskPoisson(B12)         RiskPoisson(B52)
Cell               MCLog23W Simulation!C53  MCLog23W Simulation!C12  MCLog23W Simulation!C52
Minimum            275                      14                       0
Maximum            414                      51                       3
Mean               346.367                  32.052                   0.478
Std Deviation      18.60474                 5.541842                 0.6928174
Variance           346.1364                 30.71201                 0.479996
Skewness           -0.0413433               0.1567768                1.390907
Kurtosis           3.174928                 3.09322                  4.522202
Target #1 (Value)  351
Target #1 (Perc%)  0.603

Table 9: Statistics for Galileo Simulation 4
Project TRPC Current Release (i) Data

1. New Code
New Code Size KLOC:                          29
New Code Operation Defects in 40 Weeks:     342
New Code Test & Fix Effort Man-Days:       1860
New Code Defects Found in Test & Fixed:    1041
Total New Code Defects:                    1383
Total New Code Defects per 1 KLOC:           47.68965517

New Code Testing Phase Defect Profile
Testing Phase                 Defects Removed   Effort   Effort per Defect
Component Test                      271           250       0.922509225
Component Integration Test          185           280       1.513513514
System Integration Test             462           870       1.883116883
User Acceptance Test                123           460       3.739837398
Total:                             1041          1860       1.786743516

2. Changed Code
Changed Code Size KLOC:                          16
Changed Code Operation Defects in 40 Weeks:     116
Changed Code Test & Fix Effort Man-Days:        955
Changed Code Defects Found in Test & Fixed:     355
Total Changed Code Defects:                     471
Total Changed Code Defects per 1 KLOC:           29.4375

Changed Code Testing Phase Defect Profile
Testing Phase                 Defects Removed   Effort   Effort per Defect
Component Test                       91           130       1.428571429
Component Integration Test           59           145       2.457627119
System Integration Test             151           450       2.98013245
User Acceptance Test                 54           230       4.259259259
Total:                              355           955       2.690140845

3. Release Quality in Operation
Operation Hours per Day:                         24
Operation Days per Week:                          7
Operation Time Period (Weeks):                   40
Operation Time Period (Hours):                 6720
Total Number of Defects in Operation:           458
Quality: Mean Time To Failure (MTTF) in Hours:   14.67248908

Table 10: Current Release TRPC Project Data
Project TRPC Next Release (i+1) Data

1. New Code
New Code Size KLOC:                          38
New Code Operation Defects in 40 Weeks:     489
New Code Test & Fix Effort Man-Days:       2460
New Code Defects Found in Test & Fixed:    1365
Total New Code Defects:                    1854
Total New Code Defects per 1 KLOC:           48.78947368

New Code Testing Phase Defect Profile
Testing Phase                 Defects Removed   Effort   Effort per Defect
Component Test                      365           330       0.904109589
Component Integration Test          246           370       1.504065041
System Integration Test             612          1180       1.928104575
User Acceptance Test                142           580       4.084507042
Total:                             1365          2460       1.802197802

2. Changed Code
Changed Code Size KLOC:                          26
Changed Code Operation Defects in 40 Weeks:     201
Changed Code Test & Fix Effort Man-Days:       1580
Changed Code Defects Found in Test & Fixed:     571
Total Changed Code Defects:                     772
Total Changed Code Defects per 1 KLOC:           29.69230769

Changed Code Testing Phase Defect Profile
Testing Phase                 Defects Removed   Effort   Effort per Defect
Component Test                      151           210       1.390728477
Component Integration Test          105           240       2.285714286
System Integration Test             248           745       3.004032258
User Acceptance Test                 67           385       5.746268657
Total:                              571          1580       2.767075306

3. Release Quality in Operation
Operation Hours per Day:                         24
Operation Days per Week:                          7
Operation Time Period (Weeks):                   40
Operation Time Period (Hours):                 6720
Total Number of Defects in Operation:           690
Quality: Mean Time To Failure (MTTF) in Hours:    9.739130435

Table 11: Next Release TRPC Project Data
References

[1] Lyu, Michael R., "Handbook of Software Reliability Engineering", IEEE Computer Society Press, 1996.
[2] Von Mayrhauser, A., et al., "On the need for simulation for better characterization of software reliability", Proceedings, Fourth International Symposium on Software Reliability Engineering, 1993.
[3] Tausworthe, Robert C., Lyu, Michael R., "A Generalized Software Reliability Process Simulation Technique and Tool".
[4] Gokhale, Swapna S., Lyu, Michael R., "A Simulation Approach to Structure-Based Software Reliability Analysis".
[5] Gokhale, Swapna S., Lyu, Michael R., Trivedi, Kishor S., "Reliability Simulation of Fault-Tolerant Software and Systems", Proceedings of the Pacific Rim International Symposium on Fault-Tolerant Systems, 1997.
[6] Gokhale, Swapna S., Lyu, Michael R., Trivedi, Kishor S., "Reliability Simulation of Component-Based Software Systems", Proceedings of the Ninth International Symposium on Software Reliability Engineering, 1998.
[7] Tausworthe, Robert C., Lyu, Michael R., "Software Reliability Simulation", Chapter 16 in Handbook of Software Reliability Engineering, IEEE Computer Society Press, 1996.
[8] Software Engineering Institute, "The Capability Maturity Model for Software", Version 1.1, Carnegie Mellon University, 1998.
[9] Lakey, Peter B., "Software Reliability Prediction is not a Science… Yet", Cognitive Concepts, St. Louis, 2002.