Session 8
Overview
Forecasting Methods
• Exponential Smoothing
– Simple
– Trend (Holt’s Method)
– Seasonality (Winters’ Method)
• Regression
– Trend
– Seasonality
– Lagged Variables
Applied Regression -- Prof. Juran
Forecasting
1. Analysis of Historical Data
• Time Series (Extrapolation)
• Regression (Causal)
2. Projecting Historical Patterns into the Future
3. Measurement of Forecast Quality
Measuring Forecasting Errors
• Mean Absolute Error
• Mean Absolute Percent Error
• Root Mean Squared Error
• R-square
Mean Absolute Error

$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|Y_i - \hat{Y}_i\right|$
Mean Absolute Percent Error

$\mathrm{MAPE} = 100\% \times \frac{1}{n}\sum_{i=1}^{n}\left|\frac{Y_i - \hat{Y}_i}{Y_i}\right|$

Or, alternatively: $100\% \times \frac{1}{n}\sum_{i=1}^{n}\left|\frac{Y_i - \hat{Y}_i}{\hat{Y}_i}\right|$
Root Mean Squared Error

$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \hat{Y}_i\right)^2} = \sqrt{\frac{SSE}{n}}$
R-Square

$R^2 = 1 - \frac{SSE}{TSS} = \frac{SSR}{TSS}$
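These four measures are easy to compute once you have the actuals and forecasts side by side. A minimal Python sketch (the numbers are illustrative, not the GM data):

```python
import numpy as np

def forecast_errors(y, f):
    """MAE, MAPE (in percent), RMSE, and R-square for actuals y and forecasts f."""
    y = np.asarray(y, dtype=float)
    f = np.asarray(f, dtype=float)
    e = y - f
    mae = np.mean(np.abs(e))
    mape = 100.0 * np.mean(np.abs(e / y))      # actual Y_i in the denominator
    rmse = np.sqrt(np.mean(e ** 2))            # = sqrt(SSE / n)
    sse = np.sum(e ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - sse / tss                       # = 1 - SSE/TSS = SSR/TSS
    return mae, mape, rmse, r2

mae, mape, rmse, r2 = forecast_errors([100, 110, 120], [98, 112, 118])
```

The MAPE here uses the actual $Y_i$ in the denominator; swap in $\hat{Y}_i$ for the alternative version.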
Trend Analysis
• Part of the variation in Y is believed to
be “explained” by the passage of time
• Several convenient trend models are available in an Excel chart
Example: Revenues at GM
[Chart: GM Revenue by quarter, 1-91 through 1-02]
You can use "Add Chart Element > Trendline" to superimpose a trend line on the graph.
[Chart: GM Revenue with linear trend: y = 340.23x + 31862, R² = 0.6618]
[Chart: GM Revenue with logarithmic trend: y = 5162.3·ln(x) + 24937, R² = 0.6601]
[Chart: GM Revenue with polynomial trend: y = -5.6121x² + 604x + 29752, R² = 0.6872]
[Chart: GM Revenue with power trend: y = 26532x^0.1372, R² = 0.6783]
[Chart: GM Revenue with exponential trend: y = 32044e^0.0088x, R² = 0.6505]
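All five of these trendline forms can be reproduced with ordinary least squares; the logarithmic, power, and exponential forms become linear after a transform, which is also how chart trendlines are generally fit. A sketch with made-up quarterly data, where x is a simple period index:

```python
import numpy as np

# Quarterly index and an illustrative revenue-like series (NOT the GM data).
x = np.arange(1, 13, dtype=float)
y = 30000 + 340 * x + np.array([500, -300, 200, -100, 400, -200,
                                100, -400, 300, -100, 200, -300])

# Linear trend: y = b*x + a
b_lin, a_lin = np.polyfit(x, y, 1)

# Logarithmic trend: y = b*ln(x) + a  (linear after transforming x)
b_log, a_log = np.polyfit(np.log(x), y, 1)

# Second-order polynomial trend: y = c2*x^2 + c1*x + c0
c2, c1, c0 = np.polyfit(x, y, 2)

# Power trend: y = a*x^b, fit as ln(y) = ln(a) + b*ln(x)
b_pow, ln_a_pow = np.polyfit(np.log(x), np.log(y), 1)

# Exponential trend: y = a*e^(b*x), fit as ln(y) = ln(a) + b*x
b_exp, ln_a_exp = np.polyfit(x, np.log(y), 1)
```

Note that the transformed fits minimize squared error on the log scale, which is slightly different from minimizing it on the original scale.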
You can also add moving-average trend lines, although displaying the equation and R-square is no longer an option:
[Chart: GM Revenue with 4-period moving average]
[Chart: GM Revenue with 3-period moving average]
[Chart: GM Revenue with 2-period moving average]
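A trailing moving average like the ones charted above is just the mean of the most recent few observations; a quick sketch:

```python
import numpy as np

def moving_average(y, window):
    """Trailing moving average: each output is the mean of `window` consecutive values."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(y, dtype=float), kernel, mode="valid")

ma4 = moving_average([10, 20, 30, 40, 50, 60], 4)   # -> [25.0, 35.0, 45.0]
```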
Simple Exponential Smoothing

Basically, this method uses a forecast formula of the form:

$F_{t+k} = L_t$

The forecast "k" periods in the future is the current "level," which is the weighted current observed value plus the weighted previous level:

$L_t = \alpha Y_t + (1 - \alpha)L_{t-1}$

Note that the weights must add up to 1.0.
Why is it called "exponential"?

$L_t = L_{t-1} + \alpha e_t = \alpha Y_t + \alpha(1-\alpha)Y_{t-1} + \alpha(1-\alpha)^2 Y_{t-2} + \alpha(1-\alpha)^3 Y_{t-3} + \ldots$

See p. 918 in W&A for more details.
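The recursion is only a few lines of code. This sketch uses the first four quarters of the GM revenue series with the default alpha of 0.10, and reproduces the smoothed-level column from the spreadsheet example:

```python
import numpy as np

def simple_exponential_smoothing(y, alpha):
    """Smoothed levels L_t = alpha*Y_t + (1-alpha)*L_{t-1}.
    The forecast for period t+1 (and all later periods) is L_t."""
    levels = [float(y[0])]            # initialize the level at the first observation
    for obs in y[1:]:
        levels.append(alpha * obs + (1 - alpha) * levels[-1])
    return np.array(levels)

levels = simple_exponential_smoothing([29200, 31300, 28900, 33600], alpha=0.10)
# levels[-1] (about 29783.1) is the forecast for every future quarter
```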
Example: GM Revenue
[Chart: GM Revenue by quarter, 1-91 through 1-02]
In this spreadsheet model, the forecasts appear in column G.
[Spreadsheet: simple exponential smoothing of GM revenue, Alpha = 0.100. Summary measures: MAE = 4014.376 =AVERAGE(I3:I47), RMSE = 4690.97 =SQRT(AVERAGE(K3:K47)), MAPE = 11.148% =AVERAGE(J3:J47). Key formulas: smoothed level =$B$1*E7+(1-$B$1)*F6; forecast =F8; error =E11-G11; abs(error) =ABS(H13); abs(% error) =ABS(H15/G15); error^2 =H17^2.]
Note that our model assumes that there is no trend. We use a
default alpha of 0.10.
[Chart: GM Revenue, simple smoothing with alpha = 0.10]
We use Solver to minimize RMSE by manipulating alpha.
[Spreadsheet: Solver-optimized simple smoothing with Alpha = 0.350. Summary measures: MAE = 3275.99, RMSE = 3653.27, MAPE = 8.584%.]
After optimizing, we see that alpha is 0.350 (instead of 0.10). This improves RMSE from 4691 to 3653.
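Solver's role can be mimicked with a simple grid search over alpha, minimizing the RMSE of the one-step-ahead forecasts. A sketch using the first three years of the GM revenue series shown earlier:

```python
import numpy as np

def ses_rmse(alpha, y):
    """RMSE of one-step-ahead forecasts from simple exponential smoothing."""
    level = float(y[0])
    errors = []
    for obs in y[1:]:
        errors.append(obs - level)      # the forecast for this period is the prior level
        level = alpha * obs + (1 - alpha) * level
    return float(np.sqrt(np.mean(np.square(errors))))

# First three years of the GM revenue series.
y = np.array([29200, 31300, 28900, 33600, 32000, 35200, 29400, 35800,
              35000, 36658, 30138, 37268], dtype=float)

# A coarse grid search plays the role of Solver here.
grid = np.linspace(0.01, 0.99, 99)
best_alpha = grid[np.argmin([ses_rmse(a, y) for a in grid])]
```

A numerical optimizer (e.g. scipy's minimize_scalar) would refine the answer, but the grid makes the idea plain.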
[Chart: GM Revenue, simple smoothing with alpha = 0.35]
Exponential Smoothing with Trend:
Holt's Method

$F_{t+k} = L_t + kT_t$

Level (weighted current observation plus weighted current level and trend):
$L_t = \alpha Y_t + (1-\alpha)(L_{t-1} + T_{t-1})$

Trend (weighted current trend plus weighted previous trend):
$T_t = \beta(L_t - L_{t-1}) + (1-\beta)T_{t-1}$
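A direct implementation of Holt's recursions, using the same startup convention as the slides (initial level = first observation, initial trend = 0). The smoothing constants below are the optimized values from the spreadsheet example (alpha = 0.266, beta = 0.048), so small differences from the spreadsheet come down to rounding:

```python
import numpy as np

def holt(y, alpha, beta):
    """Holt's method: level and trend recursions; forecast k ahead is L_t + k*T_t.
    Startup follows the slides: level = first observation, trend = 0."""
    level, trend = float(y[0]), 0.0
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level, trend

level, trend = holt([29200, 31300, 28900, 33600], alpha=0.266, beta=0.048)
forecast_2_ahead = level + 2 * trend
```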
[Spreadsheet: Holt's method on GM revenue with Level (alpha) = 0.266 and Trend (beta) = 0.048. Summary measures: MAE = 3094.683, RMSE = 3568.391, MAPE = 8.01%. Key formulas: level =$B$2*E4+(1-$B$2)*(F3+G3); trend =$B$3*(F6-F5)+(1-$B$3)*G5; forecast =F7+G7; error =E10-H10.]
[Chart: GM Revenue, Holt's Method (smoothing with trend)]
Holt’s model with optimized smoothing constants. This model is slightly
better than the simple model (RMSE drops from 3653 to 3568).
Exponential Smoothing with Seasonality:
Winters' Method

This method includes an explicit term for seasonality, where M is the number of periods in a season. We will use M = 4 because we have quarterly data.

Level: $L_t = \alpha\frac{Y_t}{S_{t-M}} + (1-\alpha)(L_{t-1} + T_{t-1})$

Trend: $T_t = \beta(L_t - L_{t-1}) + (1-\beta)T_{t-1}$

Seasonality: $S_t = \gamma\frac{Y_t}{L_t} + (1-\gamma)S_{t-M}$

Now, for any time k periods in the future, the forecast is given by:

$F_{t+k} = (L_t + kT_t)S_{t+k-M}$

Note that the trend term is additive, and the seasonality term is multiplicative.
$S_t = \gamma\frac{Y_t}{L_t} + (1-\gamma)S_{t-M}$

The first term is the weighted current seasonal factor; the second is the weighted seasonal factor from the same period last year.
Now, for any time k periods in the future, the forecast is given by:

$F_{t+k} = (L_t + kT_t)S_{t+k-M}$

Note that the trend term is additive, and the seasonality term is multiplicative. This is a little tricky at first, because we need a few periods of data to get the model started. The first forecast has no trend information (so we use 0 as the default), and the first four have no seasonality (so we use 1.0 as the default).
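A compact implementation of the three recursions with the startup defaults just described (trend 0, seasonal factors 1.0). The seasonal-index bookkeeping below is one reasonable convention, not the only one:

```python
import numpy as np

def winters(y, alpha, beta, gamma, M=4):
    """Winters' method with multiplicative seasonality and season length M.
    Startup follows the slides: trend defaults to 0 and each seasonal factor
    defaults to 1.0 until a full season has been observed."""
    level, trend = float(y[0]), 0.0
    seasonals = [1.0] * M
    for t in range(1, len(y)):
        s_old = seasonals[t % M]                 # factor from one season ago
        prev_level = level
        level = alpha * (y[t] / s_old) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonals[t % M] = gamma * (y[t] / level) + (1 - gamma) * s_old

    def forecast(k):
        # F_{t+k} = (L_t + k*T_t) * S_{t+k-M}
        return (level + k * trend) * seasonals[(len(y) - 1 + k) % M]

    return level, trend, seasonals, forecast

# Sanity check on a flat series: the level stays put, the trend stays 0,
# and every seasonal factor stays at 1.0.
level, trend, seasonals, forecast = winters([100.0] * 8, 0.312, 0.037, 0.202)
```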
[Spreadsheet: Winters' method on GM revenue with Level (alpha) = 0.312, Trend (beta) = 0.037, Seasonality (gamma) = 0.202. Summary measures: MAE = 2670.440, RMSE = 3233.995, MAPE = 6.82%. Key formulas: level =$B$2*(E8/H4)+(1-$B$2)*(F7+G7); trend =$B$3*(F9-F8)+(1-$B$3)*G8; seasonality =$B$4*(E10/F10)+(1-$B$4)*(H6); forecast =(F10+G10)*H7.]
Winters' model with optimized smoothing constants. This model is better than both the simple model and Holt's model (as measured by RMSE).
[Chart: GM Revenue, Winters' Method (smoothing with trend and seasonality)]
Forecasting with Regression
[Spreadsheet: quarterly GM_Rev and GM_EPS series (1-91 onward) alongside a Trend counter (1, 2, 3, ...) and seasonal dummy variables 1Q, 2Q, and 3Q.]
[Regression output for GM revenue: Multiple R = 0.8852, R Square = 0.7835, Adjusted R Square = 0.7624, Standard Error = 2736.14, n = 46; F = 37.10 (Significance F = 0.0000). Coefficients: Intercept 33286.76 (p = 0.0000), Trend 335.85 (p = 0.0000), 1Q -1289.91 (p = 0.2655), 2Q 423.40 (p = 0.7128), 3Q -4582.60 (p = 0.0003).]
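The same regression can be run outside Excel with ordinary least squares: one intercept, a Trend column, and three quarterly dummies, with the fourth quarter as the omitted baseline. A sketch with made-up data rather than the GM series:

```python
import numpy as np

# Illustrative quarterly series (NOT the GM data): linear trend, a weak third
# quarter, and some noise.
n = 16
t = np.arange(1, n + 1, dtype=float)
q1 = (t % 4 == 1).astype(float)   # dummy = 1 in first quarters
q2 = (t % 4 == 2).astype(float)
q3 = (t % 4 == 3).astype(float)   # fourth quarter is the omitted baseline
rng = np.random.default_rng(0)
y = 30000 + 300 * t - 4000 * q3 + rng.normal(0, 100, n)

# Regress y on an intercept, the trend, and the three seasonal dummies.
X = np.column_stack([np.ones(n), t, q1, q2, q3])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, b_trend, b_q1, b_q2, b_q3 = coef
```

The dummy coefficients are read as shifts relative to the baseline quarter, just as on the output above.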
Which Method is Better?
The most reasonable statistic for comparison is probably RMSE
for smoothing models vs. standard error for regression models,
as is reported here:
Revenue

            Mattel   McD      Lilly    GM         MSFT     ATT         Nike     GE         Coke     Ford
Regression  $99.37   $112.23  $109.22  $2,736.14  $154.25  $12,836.41  $279.45  $1,164.02  $164.20  $969.14
Winters'    $76.44   $84.92   $135.33  $3,234.00  $103.91  $14,622.26  $191.94  $1,184.06  $258.02  $1,648.61

EPS

Regression  $0.0874  $0.0205  $0.3727  $1.3882    $0.0687  $0.6879     $0.2080  $0.6591    $0.0164  $0.4988
Winters'    $0.1327  $0.0295  $0.5285  $1.2603    $0.0635  $0.5719     $0.2687  $0.7587    $0.0228  $1.3475
The regression models are superior most of the time (6 out of 10
revenue models and 7 out of 10 EPS models).
For GM, a regression model seems best for forecasting revenue, but a Winters' model seems best for earnings:
[Charts: GM Revenue (Regression) vs. GM Revenue (Winters' Method); GM EPS (Winters' Method) vs. GM EPS (Regression)]
For Nike, the Winters' model is better for revenue, and the regression model is better for earnings.
[Charts: Nike Revenue (Regression) vs. Nike Revenue (Winters'); Nike EPS (Winters') vs. Nike EPS (Regression)]
Time series characterized by relatively consistent trends
and seasonality favor the regression model.
If the trend and seasonality are not stable over time, then
Winters’ method does a better job of responding to their
changing patterns.
Lagged Variables
• Only applicable in a causal model
• Effects of independent variables might
not be felt immediately
• Commonly used to model advertising's effect on sales
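Building the lagged regressor is mostly an exercise in shifting a column and dropping the first observation, which has no lagged value (the "*" cell in the spreadsheet that follows). A sketch with made-up sales and advertising data:

```python
import numpy as np

# Made-up sales/advertising series in which sales respond mainly to LAST
# quarter's advertising (true coefficients: 35 on the lag, 5 on the current quarter).
adv = np.array([30, 15, 40, 10, 50, 5, 40, 20, 60, 5, 35, 15], dtype=float)
rng = np.random.default_rng(1)
sales = 100 + 35 * np.roll(adv, 1) + 5 * adv + rng.normal(0, 20, adv.size)

# The first observation has no lag, so drop it before regressing.
X = np.column_stack([np.ones(adv.size - 1), adv[1:], adv[:-1]])
coef, *_ = np.linalg.lstsq(X, sales[1:], rcond=None)
b0, b_adv, b_lag = coef   # b_lag should come out near 35, b_adv near 5
```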
Example: Motel Chain
[Spreadsheet: motel chain data, with quarterly Sales, a Quarter counter, Advertising (Adv), lagged advertising (Adv-Lag1, with no value for the first quarter), and dummy variables Qtr_1, Qtr_2, Qtr_3.]
[Regression output: Multiple R = 0.9856, R-Square = 0.9714, Adjusted R-Square = 0.9571, Standard Error of Estimate = 213.2; F = 67.88 (p = 0.0000). Coefficients: Constant 98.36 (p = 0.5843), Quarter 41.58 (p = 0.0098), Advertising 4.53 (p = 0.1880), Advertising_Lag1 34.03 (p = 0.0000), Qtr_1 280.62 (p = 0.1004), Qtr_2 -491.59 (p = 0.0055), Qtr_3 532.60 (p = 0.0029). Note that lagged advertising is highly significant while current-quarter advertising is not.]
[Spreadsheet: forecasts computed from the regression coefficients (Constant 98.36, Quarter 41.58, Adv 4.53, Adv-Lag1 34.03, Qtr_1 280.62, Qtr_2 -491.59, Qtr_3 532.60) via =$M$2+SUMPRODUCT($N$2:$S$2,D5:I5), extended four quarters ahead with future advertising set to 50.]
[Chart: Simple Exponential Smoothing Forecast, observations vs. forecasts, quarters 1-24]
[Chart: Holt's Forecast, observations vs. forecasts, quarters 1-24]
[Chart: Winters' Forecast, observations vs. forecasts, quarters 1-24]
[Chart: Multiple Regression Forecast (with Lagged Advertising), observations vs. forecasts, quarters 1-24]
Here are measures of model fit for the non-regression models:
          Simple   Holt's   Winters'
MAE       769.6    766.8    708.0
RMSE      939.9    866.6    845.6
MAPE      50.5%    36.7%    47.3%
The regression model has a standard error of only 213,
which is much better than any of the other models.
Forecasting with Minitab
[Minitab screenshots]
[Chart: Pay Rate by Job Grade, Men vs. Women]
[Chart: Distribution of Pay Rate by Job Grade and Sex]
[Regression output: Multiple R = 0.9072, R Square = 0.8231, Adjusted R Square = 0.8210, Standard Error = 97.06, n = 256; F = 390.82 (Significance F = 0.0000). Coefficients: Intercept 526.73, GRADE 75.03, SEX 59.62, T in GRADE 30.82, all significant (p <= 0.0002).]
[Table: Artsy, analysis of pay rates by grade and sex, with counts, means, standard deviations, male-female differences, pooled standard deviations, confidence limits, t statistics, and p-values for grades 1-8. Weighted Average Female Shortfall: $65.61.]
Summary
Forecasting Methods
• Exponential Smoothing
– Simple
– Trend (Holt’s Method)
– Seasonality (Winters’ Method)
• Regression
– Trend
– Seasonality
– Lagged Variables
For Session 9 and 10
• Cars (B)
• Steam