
Lectures 15, 16 – Additive Models, Trees, and Related Methods
Rice ECE697
Farinaz Koushanfar
Fall 2006
Summary
• Generalized Additive Models
• Tree-Based Methods
• PRIM – Bump Hunting
• Multivariate Adaptive Regression Splines (MARS)
• Missing Data
Additive Models
• In real life, predictor effects are often nonlinear
• Note: Some slides are borrowed from Tibshirani
Examples
The Price for Additivity
Data from a study of diabetic children, predicting log C-peptide (a blood measurement)
Generalized Additive Models (GAM)
Two-class Logistic Regression
Other Examples
Fitting Additive Models
$$Y = \alpha + \sum_{j=1}^{p} f_j(X_j) + \varepsilon$$
The mean of the error term is zero: $\mathrm{E}[\varepsilon] = 0$.
• Given observations $(x_i, y_i)$, a criterion like the penalized sum of squares can be specified for this problem, where the $\lambda_j$'s are tuning parameters:
$$\mathrm{PRSS}(\alpha, f_1, \ldots, f_p) = \sum_{i=1}^{N} \Big\{ y_i - \alpha - \sum_{j=1}^{p} f_j(x_{ij}) \Big\}^2 + \sum_{j=1}^{p} \lambda_j \int f_j''(t_j)^2 \, dt_j$$
Fitting Additive Models
The Backfitting Algorithm for Additive Models
• Initialize: $\hat{\alpha} = \frac{1}{N}\sum_{i=1}^{N} y_i$; $\hat{f}_j \equiv 0$, $\forall i, j$
• Cycle: j=1,2,…,p,1,2,…,p,1,…
$$\hat{f}_j \leftarrow S_j\!\left[\left\{ y_i - \hat{\alpha} - \sum_{k \neq j} \hat{f}_k(x_{ik}) \right\}_{1}^{N}\right]$$
$$\hat{f}_j \leftarrow \hat{f}_j - \frac{1}{N}\sum_{i=1}^{N} \hat{f}_j(x_{ij})$$
• Until the functions $\hat{f}_j$ change by less than a prespecified threshold
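As a complement to the algorithm above, here is a minimal Python sketch of backfitting. It uses SciPy's UnivariateSpline as a stand-in for the generic smoother $S_j$, and the function name backfit is illustrative, not from the lecture.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def backfit(X, y, n_iter=20, tol=1e-4):
    """Backfitting sketch for y = alpha + sum_j f_j(x_j) + eps."""
    N, p = X.shape
    alpha = y.mean()                 # initialize alpha-hat with the mean of y
    f = np.zeros((N, p))             # f[:, j] holds f_j evaluated at the data
    for _ in range(n_iter):
        f_old = f.copy()
        for j in range(p):
            # Partial residual: remove alpha-hat and all components except f_j
            r = y - alpha - f.sum(axis=1) + f[:, j]
            order = np.argsort(X[:, j])     # spline needs increasing x values
            s = UnivariateSpline(X[order, j], r[order])  # stand-in for S_j
            f[:, j] = s(X[:, j])
            f[:, j] -= f[:, j].mean()       # center f_j for identifiability
        if np.max(np.abs(f - f_old)) < tol:  # stop when the f_j's stabilize
            break
    return alpha, f

# Toy usage: two additive nonlinear effects plus noise
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)
alpha, f = backfit(X, y)
```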
Fitting Additive Models (Cont’d)
Example: Penalized Least Squares
Example: Fitting GAM for Logistic Regression (Newton-Raphson Algorithm)
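Since the slide only names the algorithm, here is a hedged sketch of the local scoring idea: a Newton-Raphson (IRLS) outer loop that forms a working response and refits an additive model using the backfit sketch above. One simplification to flag: the full algorithm uses a weighted backfit with weights $w_i = \hat{p}_i(1 - \hat{p}_i)$, which this sketch omits.

```python
import numpy as np

def logistic_gam(X, y, n_outer=10):
    """Local scoring sketch for an additive logistic model (y in {0, 1}).
    Simplification: the inner backfit ignores the IRLS weights w_i."""
    N, _ = X.shape
    eta = np.zeros(N)                           # additive predictor eta_i
    for _ in range(n_outer):
        prob = 1.0 / (1.0 + np.exp(-eta))       # p_i = sigmoid(eta_i)
        prob = np.clip(prob, 1e-6, 1 - 1e-6)    # guard the division below
        z = eta + (y - prob) / (prob * (1 - prob))  # working response
        alpha, f = backfit(X, z)                # refit additive model to z
        eta = alpha + f.sum(axis=1)
    return alpha, f
```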
Example: Predicting Email Spam
• Data from 4601 mail messages, with spam=1, email=0; the filter is trained for each user separately
• Goal: predict whether an email is spam (junk mail) or good
• Input features: relative frequencies in a message of 57 of the most commonly occurring words and punctuation marks in the training set
• Not all errors are equal: we want to avoid filtering out good email, while letting spam get through is undesirable but less serious in its consequences
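One way to reproduce this setup today, as a sketch only: the spambase data fetched from OpenML via scikit-learn, and a spline-based logistic GAM from the third-party pygam package. Both tools are assumptions on my part, not the software used in the lecture.

```python
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from pygam import LogisticGAM   # third-party GAM library (assumption)

# 4601 messages, 57 word/character-frequency features, label spam=1
data = fetch_openml("spambase", version=1, as_frame=False)
X = data.data
y = (data.target == "1").astype(int)   # OpenML serves the labels as strings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
gam = LogisticGAM().fit(X_tr, y_tr)    # default: one spline term per feature
print("test accuracy:", gam.accuracy(X_te, y_te))
```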
Predictors
Details
Some Important Features
Results
• Test-data confusion matrix for the additive logistic regression model fit to the spam training data
• The overall test error rate is 5.3%
Summary of Additive Logistic Fit
• Significant predictors from the additive model fit to the spam training data. The coefficients represent the linear part of $\hat{f}_j$, along with their standard errors and Z-scores.
• The nonlinear p-value represents a test of nonlinearity of $\hat{f}_j$
Example: Plots for Spam Analysis
Figure 9.1. Spam analysis: estimated functions for significant predictors. The rug plot along the bottom of each frame indicates the observed values of the corresponding predictor. For many predictors, the nonlinearity picks up the discontinuity at zero.
In Summary
• Additive models are a useful extension to linear models, making them more flexible
• The backfitting procedure is simple and modular
• Limitations for large data mining applications
• Backfitting fits all predictors, which is not desirable when a large number are available
Tree-Based Methods
Node Impurity Measures
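For a node with class proportions $\hat{p}_k$, the three standard impurity measures are misclassification error $1 - \max_k \hat{p}_k$, the Gini index $\sum_k \hat{p}_k(1 - \hat{p}_k)$, and cross-entropy $-\sum_k \hat{p}_k \log \hat{p}_k$. A small sketch (the function name impurities is illustrative):

```python
import numpy as np

def impurities(p):
    """Node impurity measures for class proportions p (nonnegative, sums to 1)."""
    p = np.asarray(p, dtype=float)
    misclass = 1.0 - p.max()             # misclassification error
    gini = np.sum(p * (1.0 - p))         # Gini index
    nz = p[p > 0]                        # drop zeros to avoid log(0)
    entropy = -np.sum(nz * np.log(nz))   # cross-entropy (deviance)
    return misclass, gini, entropy

print(impurities([0.8, 0.2]))  # -> (0.2, 0.32, ~0.50)
```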
Results for Spam Example
Pruned tree for the Spam Example
Classification Rules Fit to the Spam Data
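A hedged sketch of the corresponding fit with today's tooling, assuming scikit-learn (not the lecture's software) and reusing the spam split from the earlier sketch; the ccp_alpha parameter does cost-complexity pruning, playing the role of the pruning step named above.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Reuses X_tr, y_tr, X_te, y_te from the spam-loading sketch above
tree = DecisionTreeClassifier(ccp_alpha=1e-3, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", tree.score(X_te, y_te))
print(export_text(tree, max_depth=2))   # first few classification rules
```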
PRIM – Bump Hunting
Number of Observations in a Box
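A minimal sketch of PRIM's top-down peeling, under stated assumptions: at each step, peel off the $\alpha$-fraction slice of the box (along one face) whose removal leaves the highest mean response, stopping once the box holds too few observations. The pasting (re-expansion) step of full PRIM is omitted, and prim_peel is an illustrative name.

```python
import numpy as np

def prim_peel(X, y, alpha=0.1, min_support=0.1):
    """Greedy top-down peeling: shrink a box to maximize mean(y) inside."""
    n, p = X.shape
    lo = X.min(axis=0).astype(float)     # current box: lo[j] <= x_j <= hi[j]
    hi = X.max(axis=0).astype(float)
    inside = np.ones(n, dtype=bool)
    while inside.mean() > min_support:
        best_mean, best = -np.inf, None
        for j in range(p):
            xj = X[inside, j]
            # Candidate peels: drop the lowest or highest alpha-quantile slice
            for side, t in (("lo", np.quantile(xj, alpha)),
                            ("hi", np.quantile(xj, 1.0 - alpha))):
                keep = inside & ((X[:, j] >= t) if side == "lo" else (X[:, j] <= t))
                if keep.any() and y[keep].mean() > best_mean:
                    best_mean, best = y[keep].mean(), (j, side, t, keep)
        if best is None:
            break
        j, side, t, keep = best
        if keep.sum() == inside.sum():   # nothing was peeled; stop
            break
        if side == "lo":
            lo[j] = t
        else:
            hi[j] = t
        inside = keep
    return lo, hi, inside
```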
Basis Functions
MARS Forward Modeling Procedure
Multiplication of Basis Functions
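To make the last three slides concrete: MARS builds its model from reflected pairs of hinge functions $(x - t)_+$ and $(t - x)_+$ with knots $t$ at observed data values, and the forward procedure multiplies existing basis functions by new hinge pairs to form interactions. A small sketch (hinge_pair is an illustrative name):

```python
import numpy as np

def hinge_pair(x, t):
    """Reflected pair of MARS basis functions: (x - t)_+ and (t - x)_+."""
    return np.maximum(x - t, 0.0), np.maximum(t - x, 0.0)

x1 = np.linspace(0.0, 1.0, 5)
x2 = np.linspace(0.0, 1.0, 5)
h1, _ = hinge_pair(x1, 0.5)       # hinge in x1 with knot t = 0.5
h2, _ = hinge_pair(x2, 0.25)      # hinge in x2 with knot t = 0.25
interaction = h1 * h2             # product basis: (x1 - 0.5)_+ (x2 - 0.25)_+
```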
MARS on Spam Example