Main considerations in experimental design

Design of experiments relies heavily on the statistical principles discussed in the previous section, so it is worth summarising them here in a consolidated manner. The two basic principles involved are randomisation and replication; blocking may be considered as an additional principle.

Randomisation is a key issue in a designed experiment because it ensures that the responses and errors are independently distributed random variables, which is one of the important assumptions behind the statistical methods. Randomisation should be done at each stage of the experiment with regard to test specimens, lubricant formulations, and testing order. It distributes the experimental error evenly over the design space, leading to unbiased results. A random order can be obtained either from random number tables or from software.

Replication means performing the experiment several times independently at selected experimental design points. A replication may be considered as a repetition made in random order. For example, consider 5 experiments repeated twice at five selected conditions, involving a total of 10 experiments. If we randomise all ten tests, the repetitions are done in random order and not one after the other. This differs from repetition, in which the tests are done one after the other at the given condition; measuring hardness three times in sequence at a given condition, for instance, is a repetition. Replication serves mainly two purposes: (i) it provides an estimate of the experimental error, on which the analysis of a designed experiment is based, and (ii) it increases the ability to differentiate between the influences of the various factors, owing to the increase in sample size discussed in the earlier section.

Blocking may be considered as a technique that reduces the influence of nuisance factors. For example, if wear comparisons are made between two lubricants in a reciprocating tester, roughness variations in the test pieces may influence the performance. If, on the other hand, we restrict the roughness to a specific value, the two lubricants can be clearly distinguished. In effect we have blocked a factor, and this is referred to as blocking. Roughness is a nuisance factor because in this study we are not interested in its influence on wear. In a broader sense, blocking involves special methodology to minimise the influence of nuisance factors, which makes the comparison between the factors of interest clearer. Consider again the example in which one wants to know whether two lubricants differ in terms of wear by conducting experiments in a reciprocating rig. If the roughness of the different lower test specimens varies too much, it will create noise in the comparison of the lubricants, since roughness affects wear. If we treat each specimen as a block and conduct two tests in the reciprocating rig with both lubricants on the same specimen, at different locations and in random order, the variability due to roughness will be reduced. For the analysis, the $t_0$ test statistic mentioned in Table 6.1 may be used. It can be computed by recording the data as $d_i = y_{1i} - y_{2i}$, where $y_{1i}$ and $y_{2i}$ are the wear coefficients due to lubricants 1 and 2 on the ith specimen. The null and alternative hypotheses can be taken as $H_0: \mu_d = 0$ and $H_1: \mu_d \neq 0$ respectively. If one wants to compare more than two lubricants, a randomised complete block design should be used, which has not been discussed in this book; details on the use of blocking in experimental design may be obtained from the literature. A minimal sketch of the paired analysis is given below.
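The following is a minimal sketch, not taken from the original text, of the paired (blocked) comparison described above; the wear-coefficient values are hypothetical and the calculation simply follows $d_i = y_{1i} - y_{2i}$ and the paired t statistic.

```python
# Minimal sketch (hypothetical data): paired comparison of two lubricants
# blocked by specimen, testing H0: mu_d = 0 against H1: mu_d != 0.
import numpy as np
from scipy import stats

# Hypothetical wear coefficients measured on the same five specimens (blocks)
lub1 = np.array([2.1e-6, 3.4e-6, 2.8e-6, 4.0e-6, 3.1e-6])
lub2 = np.array([1.8e-6, 3.0e-6, 2.9e-6, 3.5e-6, 2.6e-6])

d = lub1 - lub2                                   # paired differences d_i
n = len(d)
t0 = d.mean() / (d.std(ddof=1) / np.sqrt(n))      # paired t statistic
p_value = 2 * stats.t.sf(abs(t0), df=n - 1)       # two-sided p-value

print(f"t0 = {t0:.3f}, p = {p_value:.3f}")
# The same result is given by stats.ttest_rel(lub1, lub2).
```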
In blocking, randomisation should be done within the block. Blocking is effective only if the variability within a block is less than the variability between blocks. If this condition is not satisfied, blocking is the wrong choice and the tests should be conducted in a completely random way over the entire design space.

Factorial design

This section first compares the one-factor-at-a-time approach with factorial design. The coding formulae for the levels of a factor, normally used in modelling, are also given. Analysis of variance (ANOVA) is then discussed.

Comparison between one-factor-at-a-time and factorial design experiments

In the usual one-factor-at-a-time approach, one factor is varied while the other factors are kept constant. In DOE, a factorial design is used so that all possible combinations of the levels of the factors are investigated in the experiment. Fig. 6.2 compares the two approaches. In this figure the coded values of five levels of concentration of two additives, ZDDP and MoDTC, are shown on the horizontal and vertical axes. The five levels are -1.4, -1.0, 0, 1.0 and 1.4. The coded values in this example can be decoded into % (w/v) from the formula given below (a short numerical sketch of this coding is given at the end of this subsection):

$$x_i = \frac{z_i - z_0}{\Delta z} \qquad (6.14)$$

where
$x_i$ = coded value of the additive concentration at the ith experiment
$z_i$ = additive concentration in % (w/v) at the ith experiment
$z_0$ = additive concentration in % (w/v) at the mid level, i.e. at the centre point, $(z_{max} + z_{min})/2$
$\Delta z$ = step value = $\dfrac{z_{max} - z_{min}}{x_{max} - x_{min}} = \dfrac{z_{max} - z_{min}}{1.4 - (-1.4)} = \dfrac{z_{max} - z_{min}}{2.8}$
$z_{max}$, $z_{min}$ = maximum and minimum values of the additive concentration in % (w/v)
$x_{max}$, $x_{min}$ = maximum and minimum values of the additive concentration in coded form

In the above coding the variation over the range of levels is linear. Nonlinear variation can also be taken into account; one (logarithmic) form is

$$x_i = \frac{\log_e z_i - \log_e z_0}{\Delta z}, \qquad \Delta z = \frac{\log_e z_{max} - \log_e z_{min}}{x_{max} - x_{min}} \qquad (6.15)$$

[Fig. 6.2. Comparison between the one-factor-at-a-time and factorial design approaches: (a) experimental points for two factors with five levels in a one-factor-at-a-time experiment; (b) experimental points for two factors with five levels in a factorial design experiment. Both panels plot ZDDP concentration against MoDTC concentration at the coded levels -1.4, -1.0, 0, 1.0 and 1.4.]

Fig. 6.2(a) depicts the one-factor-at-a-time method. Here the output response, wear, is observed by varying the concentration of one additive while keeping the concentration of the other additive constant at the central level. In total 9 experiments have to be performed; the solid circular points in the figure are the experimental points. If the experimenter wants to observe the variation in wear five times over the entire range for both additives, the total number of experiments becomes 45. Now suppose a 5² full factorial design is used, in which the number of experiments is (number of levels)^(number of factors); the number of experiments to be performed is therefore 5² = 25. Fig. 6.2(b) shows this design. From the figure we can easily see that the variation in wear is obtained five times by varying the concentration of one additive from -1.4 to 1.4 while the concentration of the other additive is held constant at each of the levels -1.4, -1.0, 0, 1.0 and 1.4. In the second approach only 25 experiments have to be performed, compared with 45 in the first approach, for the same objective. The second approach also has the advantage of allowing the interaction of factors to be studied; in the present example one can see whether there is synergy between the two additives in terms of antiwear property.
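As a minimal sketch, assuming a hypothetical concentration range of 0.2-1.6 % (w/v), the linear coding of Eqn. (6.14) can be applied as follows; the numbers are illustrative only.

```python
# Minimal sketch (hypothetical range): linear coding of Eqn. (6.14)
z_min, z_max = 0.2, 1.6                  # additive concentration range, % (w/v) (assumed)
x_min, x_max = -1.4, 1.4                 # coded range

z0 = (z_max + z_min) / 2                 # centre-point concentration
dz = (z_max - z_min) / (x_max - x_min)   # step value

def code(z):
    """Actual concentration in % (w/v) -> coded level x."""
    return (z - z0) / dz

def decode(x):
    """Coded level x -> actual concentration in % (w/v)."""
    return z0 + x * dz

for x in (-1.4, -1.0, 0.0, 1.0, 1.4):
    print(f"coded {x:+.1f}  ->  {decode(x):.2f} % (w/v)")
```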
Analysis of variance (ANOVA) for factorial design

ANOVA is used to quantify the significance of the factors and their interactions. In order to understand the concept behind it, consider the 5² factorial design given earlier, with two replications of the experiments at all 25 combinations of the levels of the two factors. The resulting 50 experiments have to be performed in a randomised order so that the experimental error is uniformly distributed over the design space. Table 6.2 shows the wear rates $w_{ijk}$ at the different combinations of factor levels and replications; the nomenclature of the symbols used in the table is given after the table.

Table 6.2. Two-factor 5² factorial design having two replications, with wear rate as the response. Rows: Factor 1, ZDDP concentration (coded value); columns: Factor 2, MoDTC concentration (coded value). Each cell contains the two replicate wear rates w_ij1, w_ij2, whose average is the cell average w̄_ij.

ZDDP \ MoDTC   |    -1.4    |    -1.0    |      0     |     1.0    |     1.4    | Row sum W_f1,i  | Row average w̄_f1,i
-1.4           | w111, w112 | w121, w122 | w131, w132 | w141, w142 | w151, w152 | W_f1,1          | w̄_f1,1
-1.0           | w211, w212 | w221, w222 | w231, w232 | w241, w242 | w251, w252 | W_f1,2          | w̄_f1,2
0              | w311, w312 | w321, w322 | w331, w332 | w341, w342 | w351, w352 | W_f1,3          | w̄_f1,3
1.0            | w411, w412 | w421, w422 | w431, w432 | w441, w442 | w451, w452 | W_f1,4          | w̄_f1,4
1.4            | w511, w512 | w521, w522 | w531, w532 | w541, w542 | w551, w552 | W_f1,5          | w̄_f1,5
Column sum     | W_f2,1     | W_f2,2     | W_f2,3     | W_f2,4     | W_f2,5     | Overall sum = W |
Column average | w̄_f2,1    | w̄_f2,2    | w̄_f2,3    | w̄_f2,4    | w̄_f2,5    | Overall average = w̄ |

where
$w_{ijk}$ = wear rate at the ith level of factor 1, jth level of factor 2 and kth replication; $1 \le i \le l_1$, $1 \le j \le l_2$, $1 \le k \le n$
$l_1$ = number of levels of factor 1 (5 in the present example)
$l_2$ = number of levels of factor 2 (5 in the present example)
$n$ = number of replications (2 in the present example)
$\bar{w}_{ij}$ = average wear rate in the ijth cell; it represents the expected wear rate due to the combined effect of factor 1, factor 2 and their interaction at the ijth combination of factor levels in the replicated space
$W_{f1\,i}$ = sum of wear rates in the ith row = $\sum_{j=1}^{l_2}\sum_{k=1}^{n} w_{ijk}$
$W_{f2\,j}$ = sum of wear rates in the jth column = $\sum_{i=1}^{l_1}\sum_{k=1}^{n} w_{ijk}$
$\bar{w}_{f1\,i}$ = average wear rate in the ith row = $W_{f1\,i}/(l_2 n)$, representing the expected wear rate due to factor 1 at its ith level over the design space of factor 2
$\bar{w}_{f2\,j}$ = average wear rate in the jth column = $W_{f2\,j}/(l_1 n)$, representing the expected wear rate due to factor 2 at its jth level over the design space of factor 1
$W$ = overall sum of wear rates = $\sum_{i=1}^{l_1}\sum_{j=1}^{l_2}\sum_{k=1}^{n} w_{ijk}$
$\bar{w}$ = overall average wear rate = $W/(l_1 l_2 n)$; it represents the expected wear rate due to factor 1, factor 2, their interaction and experimental error over the entire design space

The variability of the wear rates over the entire set of 50 experiments, quantified by the total sum of squares $SS_T$ in Eqn. (6.16), is due to several influences: the concentration effects of ZDDP and MoDTC, their interaction, and experimental error.

$$SS_T = \sum_{i=1}^{l_1}\sum_{j=1}^{l_2}\sum_{k=1}^{n}\left(w_{ijk} - \bar{w}\right)^2 \qquad (6.16)$$

$SS_T$ may be expressed as the summation of the sums of squares (SS) due to the various sources mentioned above. This decomposition of the total variability into its components is expressed as

$$SS_T = SS_1 + SS_2 + SS_{interaction} + SS_E \qquad (6.17)$$

The decomposition can also be expressed as the summation of the SS due to the combined effect of factor 1, factor 2 and their interaction, plus the SS due to experimental error:
$$SS_T = SS_{1+2+interaction} + SS_E \qquad (6.18)$$

where
$SS_1$ = sum of squares due to factor 1 (ZDDP) = $l_2 n \sum_{i=1}^{l_1}\left(\bar{w}_{f1\,i} - \bar{w}\right)^2$
$SS_2$ = sum of squares due to factor 2 (MoDTC) = $l_1 n \sum_{j=1}^{l_2}\left(\bar{w}_{f2\,j} - \bar{w}\right)^2$
$SS_{1+2+interaction}$ = sum of squares due to factor 1, factor 2 and their interaction = $n \sum_{i=1}^{l_1}\sum_{j=1}^{l_2}\left(\bar{w}_{ij} - \bar{w}\right)^2$
$SS_{interaction}$ = sum of squares due to the interaction between the factors = $SS_{1+2+interaction} - SS_1 - SS_2$
$SS_E$ = sum of squares due to experimental error = $\sum_{i=1}^{l_1}\sum_{j=1}^{l_2}\sum_{k=1}^{n}\left(w_{ijk} - \bar{w}_{ij}\right)^2$

The significance test can now be done by generating the ANOVA table, Table 6.3. In this table the mean squares are obtained by dividing each SS by its degrees of freedom; a mean square estimates the variance of its source. In ANOVA the variance of a factor is compared with the variance of the experimental error. If the F-ratio is greater than $F_{\alpha,\nu_1,\nu_2}$, obtained from the F-distribution tables available in statistical handbooks, the factor is called significant. Here $\alpha$ is the level of significance, and $\nu_1$ and $\nu_2$ are the degrees of freedom of the factor and of the experimental error respectively. The level of significance is normally taken as 0.05, which signifies that the chance of an erroneous conclusion of significance is 5%.

Table 6.3. ANOVA table for the wear rate data of the two-factor 5² factorial design

Source          | Sum of squares  | Degrees of freedom                                   | Mean squares    | F-ratio
Factor 1: ZDDP  | SS_1            | l1 - 1                                               | MS_1            | MS_1/MS_E
Factor 2: MoDTC | SS_2            | l2 - 1                                               | MS_2            | MS_2/MS_E
Interaction     | SS_interaction  | (l1·l2 - 1) - (l1 - 1) - (l2 - 1) = (l1 - 1)(l2 - 1) | MS_interaction  | MS_interaction/MS_E
Error           | SS_E            | l1·l2·(n - 1)                                        | MS_E            |
Total           | SS_T            | l1·l2·n - 1                                          |                 |

A minimal numerical sketch of this decomposition and of the F-tests is given below.
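The following sketch, which is not from the original text and uses randomly generated wear rates purely for illustration, computes the sums of squares of Eqns. (6.16)-(6.18) and the F-ratios of Table 6.3 for a 5² design with two replications.

```python
# Minimal sketch (hypothetical data): two-factor ANOVA for a 5x5 design
# with n = 2 replications, following Eqns. (6.16)-(6.18) and Table 6.3.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
l1, l2, n = 5, 5, 2                                   # levels and replications
w = rng.normal(loc=1.0, scale=0.1, size=(l1, l2, n))  # wear rates w_ijk (assumed)

w_bar = w.mean()                    # overall average
w_ij = w.mean(axis=2)               # cell averages
w_f1 = w.mean(axis=(1, 2))          # row averages (factor 1)
w_f2 = w.mean(axis=(0, 2))          # column averages (factor 2)

SS_T = ((w - w_bar) ** 2).sum()
SS_1 = l2 * n * ((w_f1 - w_bar) ** 2).sum()
SS_2 = l1 * n * ((w_f2 - w_bar) ** 2).sum()
SS_12 = n * ((w_ij - w_bar) ** 2).sum()               # factors 1, 2 and interaction
SS_int = SS_12 - SS_1 - SS_2
SS_E = ((w - w_ij[:, :, None]) ** 2).sum()

df_E = l1 * l2 * (n - 1)
MS_E = SS_E / df_E
for name, SS, df in (("ZDDP", SS_1, l1 - 1),
                     ("MoDTC", SS_2, l2 - 1),
                     ("interaction", SS_int, (l1 - 1) * (l2 - 1))):
    F = (SS / df) / MS_E
    F_crit = stats.f.ppf(0.95, df, df_E)              # F_{0.05, df, df_E}
    print(f"{name:12s} F = {F:5.2f}  F_crit = {F_crit:5.2f}  significant: {F > F_crit}")
```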
Fractional factorial design

A full factorial design is not economically viable as the number of factors increases. In that case a fractional factorial design, which is a fraction of the full factorial design, is used. The scheme for obtaining such a design and the inherent aliasing of factorial effects are discussed below; the last subsection discusses dealiasing techniques and an appropriate methodology for selecting a fractional design.

Fractional factorial design – 2 levels

A 2-level factorial design is expressed in terms of +1 and -1 levels. The total number of experiments is $2^k$, where k is the number of factors; as the number of factors increases, the number of experiments increases rapidly. It should be noted that out of the $2^k - 1$ degrees of freedom (df), only ${}^kC_1 = \dfrac{k!}{1!(k-1)!}$ and ${}^kC_2 = \dfrac{k!}{2!(k-2)!}$ df are used for the main effects and two-factor interactions. The remaining df, obtained at the cost of a large number of experiments, correspond to higher-order interaction terms that are normally insignificant. Hence a fractional factorial design can be constructed so that only the effects and interactions of interest are estimated.

A two-level fractional design is constructed by choosing proper generator(s) I = ab…c, whose column of levels is entirely +1 or entirely -1; here ab…c represents the product of the corresponding levels of three or more factors a, b, …, c. The examples shown in this section have generators with all +1 levels. In a $1/2^p$ fraction of a $2^k$ design, which is expressed as $2^{k-p}$, p independent generators are required to construct the $2^{k-p}$ design. Out of these p generators, the generator $I_{smallest}$ having the smallest number of factors defines the resolution of the design: the number of factors in $I_{smallest}$, $n(I_{smallest})$, is called the resolution of the design and is expressed in Roman numerals.

Let us consider the $2^{5-2}$ design shown in Table 6.4. This design has 8 combinations of levels. The first three factors have levels according to the $2^3$ full factorial design; the levels of the 4th factor are found by multiplying the 1st and 2nd columns, and the levels of the 5th factor by multiplying the 1st and 3rd columns (a sketch of this column construction is given at the end of this subsection). In short, the generation of the 4th and 5th columns may be expressed in equation form as 4 = 12 and 5 = 13. Multiplying both sides of the first relation by 4 and of the second relation by 5 gives the two generators I = 124 and I = 135 respectively, since 4·4 and 5·5 give columns of +1 levels. This design is called a resolution III design because both generators contain the same number of factors, 3, so $n(I_{smallest}) = 3$.

From the relations 4 = 12 and 5 = 13 we see that it will be difficult to distinguish between the main effect of factor 4 and the interaction effect between factors 1 and 2. Similarly, it will be difficult to decide whether the behaviour of the response is due to factor 5 or due to the interaction between factors 1 and 3. Factor 4 is said to be aliased with the 12 interaction and factor 5 with the 13 interaction. The word alias refers to the situation in which an effect is confounded with other interactions or main effects. All aliases of this design can be obtained from the above two generators and are given below:

1 = 24 = 35 = 12345, 2 = 14 = 345 = 1235, 3 = 15 = 245 = 1234, 4 = 12 = 235 = 1345, 5 = 13 = 234 = 1245, 23 = 45 = 125 = 134, 25 = 34 = 123 = 145

From these aliases we can say that this resolution III design is suitable only if the interest is in finding the main factor effects, and provided it is known that two-factor and higher-order interactions are insignificant. Other resolution III designs can be generated by choosing two independent generators from the six three-factor generators I = 124, 134, 234, 125, 135 and 235 (one defining the column of factor 4 and the other the column of factor 5), provided the product of the chosen pair does not reduce to a two-factor word, which would alias two main effects with each other.

Table 6.4. Construction of the 1/4th fraction of the 2^5 design, i.e. the 2^(5-2) design

S.N. | Factor 1 | Factor 2 | Factor 3 | Factor 4 (4 = 12, I = 124) | Factor 5 (5 = 13, I = 135)
 1   |   -1     |   -1     |   -1     |   +1                       |   +1
 2   |   +1     |   -1     |   -1     |   -1                       |   -1
 3   |   -1     |   +1     |   -1     |   -1                       |   +1
 4   |   +1     |   +1     |   -1     |   +1                       |   -1
 5   |   -1     |   -1     |   +1     |   +1                       |   -1
 6   |   +1     |   -1     |   +1     |   -1                       |   +1
 7   |   -1     |   +1     |   +1     |   -1                       |   -1
 8   |   +1     |   +1     |   +1     |   +1                       |   +1

The ANOVA for the above design can be obtained from the procedure given in the previous section, in which 7 effects, each having one df, are estimated: 1 + 24 + 35 + 12345, 2 + 14 + 345 + 1235, 3 + 15 + 245 + 1234, 4 + 12 + 235 + 1345, 5 + 13 + 234 + 1245, 23 + 45 + 125 + 134, and 25 + 34 + 123 + 145. Of course, as stated earlier, the design must be replicated (here two replications) in order to estimate the experimental error for the ANOVA.

If the experimenter's interest is in finding the main factor effects with confidence, then a resolution IV design is useful; in such a design the main effects are not aliased with two-factor interactions. An example is the $2^{4-1}$ design, in which the generator I = 1234 is used to obtain the levels of the 4th factor. The ANOVA of this design estimates the following seven effects, each with one df: 1 + 234, 2 + 134, 3 + 124, 4 + 123, 12 + 34, 13 + 24, and 14 + 23. From these aliases we can say that if it is known that three-factor interactions are insignificant, as normally happens, the main factor effects can be investigated clearly.

To find the main factor and two-factor interaction effects without any ambiguity, a resolution V design is useful; in such a design main effects and two-factor interactions are not aliased with any other main effect or two-factor interaction. An example is the $2^{5-1}$ design, in which the generator I = 12345 is used to obtain the levels of the 5th factor. The ANOVA of this design estimates the following 15 effects, each with one df: 1 + 2345, 2 + 1345, 3 + 1245, 4 + 1235, 5 + 1234, 12 + 345, 13 + 245, 14 + 235, 15 + 234, 23 + 145, 24 + 135, 25 + 134, 34 + 125, 35 + 124, and 45 + 123. These aliases clearly indicate that if three-factor and higher-order interactions are insignificant, the main effects and two-factor interaction effects can be estimated with confidence.
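As a minimal sketch (assumed, not from the text), the column construction of Table 6.4 with the generators 4 = 12 and 5 = 13 can be reproduced as follows.

```python
# Minimal sketch: constructing the 2^(5-2) design of Table 6.4 from the
# generators 4 = 12 (I = 124) and 5 = 13 (I = 135).
import numpy as np

runs = np.arange(8)
f1 = np.where(runs % 2 == 0, -1, +1)         # factor 1 alternates fastest
f2 = np.where((runs // 2) % 2 == 0, -1, +1)
f3 = np.where((runs // 4) % 2 == 0, -1, +1)
f4 = f1 * f2                                 # generator 4 = 12
f5 = f1 * f3                                 # generator 5 = 13

design = np.column_stack([f1, f2, f3, f4, f5])
print(design)   # reproduces the 8 runs of Table 6.4

# Defining relation: I = 124 = 135 = 2345 (the product of the two generators);
# the shortest word has 3 letters, hence resolution III.
```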
Fractional factorial design – 3 levels

A 3-level factorial design allows the curvature of the response, if it exists, to be taken into account in modelling. The three levels are usually represented numerically by designating the low, medium and high levels as 0, 1 and 2. A fractional design in this case is developed by taking the generator I as a linear combination of the factor levels. The value $s_{m3}$ in the generator equation is the remainder of this sum on division by 3, i.e. the sum modulo 3 (mod 3), and may be 0, 1 or 2.

Let us consider the 1/3 fraction of the 3³ factorial design, represented as the $3^{3-1}$ design, with the factors denoted A, B and C. The generator I can be chosen from any of the four orthogonal components of the three-factor interaction: ABC, $AB^2C$, $ABC^2$ and $AB^2C^2$. The respective generator equations are $x_1 + x_2 + x_3 = s_{m3}$, $x_1 + 2x_2 + x_3 = s_{m3}$, $x_1 + x_2 + 2x_3 = s_{m3}$, and $x_1 + 2x_2 + 2x_3 = s_{m3}$ (mod 3). The exponent 2 on B or C represents adding that factor's level twice, and therefore appears as a coefficient of the corresponding level x in the generator equation. Since $s_{m3}$ can be 0, 1 or 2, a total of 12 different $3^{3-1}$ designs can be generated.

Let us select the last generator with $s_{m3} = 1$. In the $3^{3-1}$ design the first two factors A and B have levels according to the 3² full factorial design, and the levels of the third factor C are found from the equation $x_1 + 2x_2 + 2x_3 = 1$ (mod 3). As $s_{m3}$ is defined modulo 3, a multiple of 3 is added to the right-hand side of the equation whenever required in order to find the level of the third factor. Following this procedure, the 9 combinations of levels found are: 002, 011, 020, 100, 112, 121, 201, 210 and 222 (a sketch of this construction is given at the end of this subsection). All aliases of this design can be obtained from I and I². The ANOVA of this design estimates the following four effects, each having two df: A + BC + ABC, B + $AC^2$ + $ABC^2$, C + $AB^2$ + $AB^2C$, and AB + AC + $BC^2$. In view of these aliases, care should be taken in analysing the main effects. If the factors are of an independent nature and do not interact with other factors, this design is suitable for investigating the effects of the factors, including a possible nonlinear nature of the response. The modelling of response curvature can also be done by using RSM; the selection of 3-level designs is discussed in detail in chapter 7.
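The following minimal sketch (assumed, not from the text) generates the $3^{3-1}$ design of the example directly from the generator equation $x_1 + 2x_2 + 2x_3 = 1$ (mod 3).

```python
# Minimal sketch: 3^(3-1) design from the generator x1 + 2*x2 + 2*x3 = 1 (mod 3)
import itertools

runs = [(x1, x2, x3)
        for x1, x2, x3 in itertools.product(range(3), repeat=3)
        if (x1 + 2 * x2 + 2 * x3) % 3 == 1]

print(" ".join("".join(str(v) for v in r) for r in runs))
# prints: 002 011 020 100 112 121 201 210 222
```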
Dealiasing and selection of the optimum fractional design

The previous subsections on 2- and 3-level fractional designs touched on the issue of selecting between alternative designs; the present subsection provides a more formal methodology. Selection of the best fractional design, and removal of the ambiguity between aliased terms, become easier if the following empirical principles are considered: hierarchy, sparsity and heredity. The hierarchy principle states that lower-order effects are more likely to be significant than higher-order effects; it suggests that if resources are limited, priority should be given to lower-order effects. The sparsity principle indicates that, out of several variables, the process is likely to be affected by only a few factors. The heredity principle says that an interaction term can be significant only if at least one of its parent factors is significant.

Dealiasing technique

The ambiguity between aliased factors can be removed by the considerations given below:
(i) If effects of different orders are aliased, then by the hierarchy principle the higher-order effects may be ignored in comparison with the lower-order ones.
(ii) If the aliased effects are of the same order, prior knowledge of which factors are not significant may dealias the effect.
(iii) In a study of the whole process, the factors can be grouped into different subprocesses. Taking the wear process as an example, lubricant formulation, surface formation and operating conditions are the subprocesses. Factor interactions within a subprocess are more likely to be significant than factor interactions between subprocesses.
(iv) Further experiments beyond the original design can be planned to provide the additional information needed to dispel the confusion among aliased factors. The details of conducting such experiments are available in the literature.

Optimum selection of fractional design

Selection of the optimum fraction is normally based on two criteria, namely (i) maximum resolution and (ii) minimum aberration. In order to understand these criteria, let us define for a $2^{k-p}$ design the word-length pattern vector $W = (A_3, A_4, \ldots, A_k)$, in which $A_i$ is the number of words of the defining relation (the generators and their products) having i factors. The smallest r such that $A_r \ge 1$ is the resolution of the $2^{k-p}$ design. The maximum resolution criterion selects the fractional design having the maximum resolution. The minimum aberration criterion states that, for any two $2^{k-p}$ designs d1 and d2, if r is the smallest integer such that $A_r(d1) \ne A_r(d2)$, then d1 is said to have less aberration than d2 if $A_r(d1) < A_r(d2)$; the reason is that a large number of words with r factors gives more aliases and hence more aberration. If there is no design with less aberration than d1, then d1 has minimum aberration. A short sketch of computing the word-length pattern is given at the end of this subsection.

The best fractional design depends mainly on the objective of the experiments. The following tabulation of some important general objectives, the corresponding recommendations, and their limitations may help in the selection and design of fractional experiments.

Table 6.5. Recommendations and limitations in selecting a fractional design for different objectives

Objective                                                 | Recommended fractional design | Limitation
Linear modelling of the response                          | Designs having 2 levels       | -
Nonlinear modelling of the response                       | Designs having 3 levels       | -
Screening of main factors                                 | Design of resolution III      | Two-factor and higher-order interactions are insignificant*
Clear effect of main factors                              | Design of resolution IV**     | Three-factor and higher-order interactions are insignificant***
Clear effect of main factors and two-factor interactions  | Design of resolution V        | Three-factor and higher-order interactions are insignificant***

* Some $2^{k-p}$ designs may be generated by selecting the generators in such a way that a resolution III design gives some clear main effects along with some clear two-factor interactions, while a resolution IV design gives all main effects clear but no clear two-factor interactions. If the objective is to investigate some main effects together with some two-factor interactions, rather than all main effects, the resolution III design will then be the better choice; the maximum resolution criterion in this case gives the wrong recommendation.
** Among the many resolution IV designs for a given number of factors k and a $1/2^p$ fraction, the design having the largest number of clear two-factor interactions is best; in this case the minimum aberration criterion does not necessarily give the better design.
*** Three-factor interactions usually remain insignificant; the special cases where they are significant are considered in the last section.
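The following minimal sketch (assumed, not from the text) computes the word-length pattern $W = (A_3, A_4, \ldots, A_k)$, and hence the resolution, of a $2^{k-p}$ design from its p independent generators, treating the defining relation as all products of the generators.

```python
# Minimal sketch: word-length pattern and resolution of a 2^(k-p) design
from itertools import combinations

def word_length_pattern(k, generators):
    """generators: list of sets of factor numbers, e.g. [{1, 2, 4}, {1, 3, 5}]."""
    # The defining relation consists of all non-empty products of the generators;
    # a product corresponds to the symmetric difference of the factor sets.
    words = []
    for r in range(1, len(generators) + 1):
        for combo in combinations(generators, r):
            w = set()
            for g in combo:
                w ^= g
            words.append(w)
    pattern = [sum(len(w) == i for w in words) for i in range(3, k + 1)]
    resolution = min(len(w) for w in words)
    return pattern, resolution

# The 2^(5-2) example of Table 6.4: generators I = 124 and I = 135
pattern, res = word_length_pattern(5, [{1, 2, 4}, {1, 3, 5}])
print("(A3, A4, A5) =", tuple(pattern), " resolution =", res)
# (A3, A4, A5) = (2, 1, 0)  resolution = 3
```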
Regression modelling

The results of a designed experiment can be used to develop an empirical model that relates the input variables to the response. Such a model is useful for prediction and control, and it also helps in selecting the conditions that lead to the optimum response. This modelling is usually done by regression analysis. The polynomial model normally used is given below:

$$y = b_0 + \sum_{i=1}^{k} b_i x_i + \varepsilon \qquad (6.19)$$

where
y = response of the experiment
$b_i$ = coefficients found by regression, $0 \le i \le k$
k = number of regressor variables
$x_i$ = regressor, representing either a linear variable (e.g. a main factor) or a nonlinear variable (e.g. an interaction or a quadratic term of the main factors)
$\varepsilon$ = error or residual

Although the variables in the above model may be linear or nonlinear, the model is linear in the coefficients $b_i$ and is therefore called a linear regression model. The coefficients are found by the linear least squares method, which minimises the error $\varepsilon$. In order to develop the method for finding the coefficients, let us express the model for all experimental observations in matrix form as

$$Y = Xb + \varepsilon \qquad (6.20)$$

where
Y = column matrix of the N observed responses, of order N×1: $Y = [y_1 \; y_2 \; \ldots \; y_N]^T$
X = regressor variable matrix of order N×p, where p = k + 1 is the total number of coefficients in the model including the constant:

$$X = \begin{bmatrix} 1 & x_{11} & x_{12} & \ldots & x_{1k} \\ 1 & x_{21} & x_{22} & \ldots & x_{2k} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_{N1} & x_{N2} & \ldots & x_{Nk} \end{bmatrix}$$

Here $x_{ij}$ represents the level of the jth regressor variable at the ith experiment. It should be noted that if the levels are expressed in coded form in the model, the relative importance of the factors can easily be deduced.
b = coefficient matrix of order p×1: $b = [b_0 \; b_1 \; \ldots \; b_k]^T$
$\varepsilon$ = column matrix of the errors or residuals at the N experiments, of order N×1: $\varepsilon = [\varepsilon_1 \; \varepsilon_2 \; \ldots \; \varepsilon_N]^T$

The criterion used in the least squares method to find the coefficients is to minimise the deviation of the predicted values from the experimental values of the response. This is achieved by minimising the following function:

$$f = \sum_{i=1}^{N} \varepsilon_i^2 = (Y - Xb)^T (Y - Xb) \qquad (6.21)$$

Using matrix operations the above equation can be expressed as

$$f = Y^T Y - 2 b^T X^T Y + b^T X^T X b \qquad (6.22)$$

Applying the minimisation principle, i.e. partially differentiating f with respect to the unknown coefficients and equating to zero, gives the relationship $-2X^T Y + 2X^T X b = 0$. Rearranging gives the coefficient matrix as

$$b = \left(X^T X\right)^{-1} X^T Y \qquad (6.23)$$

The above equation can also be used for a model expressed as

$$y = b_0\, x_1^{b_1} x_2^{b_2} x_3^{b_3} \qquad (6.24)$$

where $x_1$, $x_2$ and $x_3$ are main factors. Taking the log of both sides, the model becomes

$$\log_e y = \log_e b_0 + b_1 \log_e x_1 + b_2 \log_e x_2 + b_3 \log_e x_3 \qquad (6.25)$$

If we take $\log_e y$ as the response, this equation is linear in the coefficients $\log_e b_0$, $b_1$, $b_2$ and $b_3$; hence Eqns. (6.24) and (6.25) represent a linear regression model, and the linear least squares equation (6.23) can be used to estimate its coefficients.

In some engineering applications the model may be nonlinear in the coefficients. One form is

$$y = b_0\left(1 - x_1^{b_1}\right) + b_2 x_2 \qquad (6.26)$$

The coefficients of such equations are determined using the nonlinear least squares method. One such method, developed by the authors [12], has been used in the characterisation of wear and is discussed in the next chapter.
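As a minimal sketch (assumed, not from the text), the least squares estimate of Eqn. (6.23) can be computed for a small hypothetical data set with two coded factors and their interaction as regressors.

```python
# Minimal sketch (hypothetical data): ordinary least squares fit of the
# linear regression model of Eqns. (6.19)-(6.23).
import numpy as np

# Coded levels of two factors and the measured responses (all values assumed)
x1 = np.array([-1., -1.,  1.,  1., 0., 0.])
x2 = np.array([-1.,  1., -1.,  1., 0., 0.])
y  = np.array([4.1, 5.0, 6.2, 9.1, 6.0, 6.1])

# Regressor matrix X = [1, x1, x2, x1*x2]: constant, main effects, interaction
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])

# Eqn. (6.23): b = (X^T X)^{-1} X^T Y, computed with a least squares solver
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print("coefficients b0..b3:", np.round(b, 3))

y_hat = X @ b                       # predicted responses
residuals = y - y_hat               # epsilon in Eqn. (6.20)
print("residual sum of squares:", round(float(residuals @ residuals), 4))
```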
Significance of the model

The significance of the model is tested by analysis of variance. The ANOVA table is given as Table 6.6; the nomenclature of the symbols used is given after the table.

Table 6.6. ANOVA table for the regression model

Source        | Sum of squares | Degrees of freedom (df)       | Mean squares                | F-ratio
Regression    | SS_reg         | k                             | MS_reg = SS_reg/k           | MS_reg/MS_res
Residual      | SS_res         | N - k - 1                     | MS_res = SS_res/(N - k - 1) |
  Lack of fit | SS_lof         | df_lof = df_res - df_pe       | MS_lof = SS_lof/df_lof      | MS_lof/MS_pe
  Pure error  | SS_pe          | df_pe = Σ(n_i - 1) = N - n_p  | MS_pe = SS_pe/(N - n_p)     |
Total         | SS_T           | N - 1                         |                             |

where
$SS_{reg}$ = sum of squares (SS) due to the regression model = $b^T X^T Y - g^2/N$, where g is the grand total of all experimental observations y and N is the total number of observations; b, X and Y have already been defined in Eqn. (6.20)
$SS_{res}$ = residual SS, representing the sum of squared deviations between the experimental and predicted values, i.e. the function f defined in Eqn. (6.21)
$n_p$, $n_i$, N = number of experimental points, number of replications at the ith experimental point, and total number of experiments respectively
$SS_{pe}$ = SS due to pure error = $\sum_{i=1}^{n_p}\sum_{j=1}^{n_i}\left(y_{ij} - \bar{y}_i\right)^2$, where $y_{ij}$ is the observation at the ith experimental point and jth replicate, and $\bar{y}_i$ is the average of the observations at the ith experimental point
$SS_{lof}$ = SS for lack of fit, representing the weighted sum of squared deviations between the expected response $\bar{y}_i$ and the predicted value $\hat{y}_i$ at the ith location = $\sum_{i=1}^{n_p} n_i\left(\bar{y}_i - \hat{y}_i\right)^2$
$SS_T$ = $Y^T Y - g^2/N$

It should be noted that in the above ANOVA table $SS_T$ is partitioned into $SS_{reg}$ and $SS_{res}$, while $SS_{res}$ is further partitioned into $SS_{lof}$ and $SS_{pe}$. This is expressed by the following two equations:

$$SS_T = SS_{reg} + SS_{res} \qquad (6.27)$$

$$SS_{res} = SS_{lof} + SS_{pe} \qquad (6.28)$$

The model is tested in two ways. (i) The significance of the regression is tested with reference to the residual mean square $MS_{res}$: if the F-ratio is greater than $F_{\alpha,\,k,\,N-k-1}$, the model is said to be significant, which means that out of the k regressor variables at least one is significant. (ii) The non-significance of the lack of fit is tested with reference to the pure error mean square $MS_{pe}$: if the F-ratio is less than $F_{\alpha,\,df_{lof},\,df_{pe}}$, the lack of fit is insignificant and the assumed model form is adequate. If the lack of fit is found to be significant, the model should be discarded and another model tried.

Individual coefficients are tested by computing the F-ratio from the formula $F_i = \dfrac{b_i^2}{C_{ii}\,MS_{res}}$, where $C_{ii}$ is the diagonal element of $(X^T X)^{-1}$ corresponding to the ith regressor variable $x_i$. If $F_i$, the F-ratio of the coefficient $b_i$ corresponding to the regressor variable $x_i$, is found to be greater than $F_{\alpha,\,1,\,N-k-1}$, then that coefficient is said to be significant; in other words, the variable $x_i$ is significant. It should be noted that this is not a confirmatory test, as the value of a coefficient depends on the other regressor variables in the model.

Testing the adequacy of the model

The adequacy of the model is checked by plotting the residuals, computing the externally studentised residual $t_i$ for all experiments, quantifying and comparing the R-squared $R^2$ and adjusted R-squared $R^2_{adj}$, and measuring the predictive capability of the model, $R^2_{prediction}$.

Residual plots are helpful in checking whether the residuals are normally distributed, as the hypothesis testing is based on this assumption. Another piece of information required is the identification of outliers. Outliers are the experimental points at which the residual $y_i - \hat{y}_i$ goes beyond a specified limit, e.g. $3\sqrt{MS_{res}}$. The best way to find an outlier is to compute the externally studentised residual $t_i$ at all experimental points and check for outliers through the hypothesis testing described in section 6.2, since the statistic $t_i$ is a t-distributed random variable with N - p - 1 degrees of freedom.
The formula for calculating $t_i$ is given below:

$$t_i = \frac{y_i - \hat{y}_i}{\sqrt{S_{(i)}^2\left(1 - h_{ii}\right)}} \qquad (6.29)$$

where
$h_{ii}$ = diagonal element of the hat matrix H of order N×N, defined as $H = X\left(X^T X\right)^{-1} X^T$. It can easily be shown that the predicted response column matrix $\hat{Y}$ is obtained from $\hat{Y} = HY$.
$S_{(i)}^2$ = estimate of the residual variance $\sigma^2$ obtained by removing the ith observation:

$$S_{(i)}^2 = \frac{(N - p)\,MS_{res} - \dfrac{\left(y_i - \hat{y}_i\right)^2}{1 - h_{ii}}}{N - p - 1}$$

Any outliers found should be critically examined; they may be due to serious faults in the experiments or measurements, or to wrong assumptions in the modelling.

How closely the experimental observations are fitted by the model is quantified by the coefficient of determination $R^2$, obtained from the following equation:

$$R^2 = 1 - \frac{SS_{res}}{SS_T} \qquad (6.30)$$

Adding higher-order terms, whether significant or not, will always increase the $R^2$ value, but this does not mean that the model is good. To judge a model containing a large number of terms, the adjusted R-squared $R^2_{adj}$ should be estimated; in this statistic an adjustment is made for the corresponding degrees of freedom of $SS_{res}$ and $SS_T$:

$$R^2_{adj} = 1 - \frac{SS_{res}/(N - p)}{SS_T/(N - 1)} = 1 - \frac{N - 1}{N - p}\left(1 - R^2\right) \qquad (6.31)$$

where N and p represent the total number of experiments and the total number of coefficients in the model (including the constant) respectively. A large difference between $R^2$ and $R^2_{adj}$ implies that non-significant terms have been included in the model.

The capability of the model to predict new observations is quantified by $R^2_{prediction}$, obtained from

$$R^2_{prediction} = 1 - \frac{PRESS}{SS_T} \qquad (6.32)$$

where

$$PRESS = \text{prediction sum of squares} = \sum_{i=1}^{N}\left(\frac{y_i - \hat{y}_i}{1 - h_{ii}}\right)^2 \qquad (6.33)$$

There should not be much difference between $R^2_{adj}$ and $R^2_{prediction}$. The quantities described above for checking the adequacy of the model are given to convey the concept and are only indicative; one has to refer to the literature for a more detailed analysis. The empirical model obtained must also be verified by experiments at other locations within the design space. A minimal sketch of these adequacy computations is given below.
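The following sketch (assumed, not from the text) computes the hat matrix, the externally studentised residuals of Eqn. (6.29), $R^2$, $R^2_{adj}$ and PRESS for a small hypothetical two-factor data set.

```python
# Minimal sketch (hypothetical data): model adequacy statistics of
# Eqns. (6.29)-(6.33) for a two-factor model with an interaction term.
import numpy as np

def adequacy(X, y):
    N, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat matrix
    y_hat = H @ y
    e = y - y_hat                                  # residuals
    h = np.diag(H)

    SS_res = e @ e
    SS_T = ((y - y.mean()) ** 2).sum()
    MS_res = SS_res / (N - p)

    S2_i = ((N - p) * MS_res - e ** 2 / (1 - h)) / (N - p - 1)
    t_ext = e / np.sqrt(S2_i * (1 - h))            # externally studentised residuals

    R2 = 1 - SS_res / SS_T
    R2_adj = 1 - (N - 1) / (N - p) * (1 - R2)
    PRESS = ((e / (1 - h)) ** 2).sum()
    R2_pred = 1 - PRESS / SS_T
    return t_ext, R2, R2_adj, R2_pred

# Hypothetical design: four corner points and three centre-point replicates
x1 = np.array([-1., -1., 1., 1., 0., 0., 0.])
x2 = np.array([-1., 1., -1., 1., 0., 0., 0.])
y  = np.array([4.1, 5.0, 6.2, 9.1, 6.0, 6.3, 6.1])
X  = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])

t_ext, R2, R2_adj, R2_pred = adequacy(X, y)
print("max |t_i| =", round(float(np.abs(t_ext).max()), 2))
print("R2 =", round(R2, 3), " R2_adj =", round(R2_adj, 3), " R2_pred =", round(R2_pred, 3))
```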