Lecture 10: Transforming the data

Transforming the data
Modified from:
Gotelli and Ellison 2004, Chapter 8;
Sokal and Rohlf 2000, Chapter 13
What is a transformation?
It is a mathematical function that is applied to all the observations of a given variable:

Y* = f(Y)

• Y represents the original variable, Y* is the transformed variable, and f is a mathematical function that is applied to the data
Most are monotonic:
• Monotonic functions do not change the rank order of the data, but they do change their relative spacing, and therefore affect the variance and shape of the probability distribution
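For example, a minimal R sketch (with made-up values) showing that a log transformation preserves rank order but changes the relative spacing and variance:

y <- c(1, 10, 100, 1000)   # hypothetical, strongly skewed values
y_star <- log10(y)         # a monotonic transformation

rank(y); rank(y_star)      # identical ranks: the order is preserved
var(y); var(y_star)        # spacing and variance change dramatically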
There are two legitimate reasons to transform your data before analysis:
• The patterns in the transformed data may be easier to understand and communicate than patterns in the raw data.
• They may be necessary so that the analysis is valid.
They are often useful for converting curves into straight lines:
The logarithmic function is very useful when two variables are related to each other by multiplicative or exponential functions:

Y = β0 + β1 log(X)
Logarithmic (X):

[Figure: Y plotted against X (left) and against log(X) (right); the fitted relationship y = ln(x) is curved on the arithmetic scale but becomes the straight line Y = β0 + β1 log(X) on the logarithmic scale.]
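A minimal R sketch of this idea (with made-up data) fits a straight line to Y against log(X):

set.seed(1)
X <- 10^runif(50, 0, 5)                     # hypothetical X spanning several orders of magnitude
Y <- 2 + 3 * log10(X) + rnorm(50, sd = 0.5)

fit <- lm(Y ~ log10(X))                     # straight line on the log(X) scale
plot(log10(X), Y); abline(fit)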
Example: Asi's growth (50% each year)

Year  Weight (g)
 1     10.0
 2     15.0
 3     22.5
 4     33.8
 5     50.6
 6     75.9
 7    113.9
 8    170.9
 9    256.3
10    384.4
11    576.7
12    865.0
Exponential: Y = β0 e^(β1 X)

[Figure: weight (g) plotted against year on arithmetic (left) and logarithmic (right) axes; the fitted curve y = 6.6667 e^(0.4055x) becomes a straight line on the logarithmic scale.]

ln(Y) = ln(β0) + β1 X
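A minimal R sketch (using the table above) recovers the exponential model by regressing ln(weight) on year:

year   <- 1:12
weight <- c(10.0, 15.0, 22.5, 33.8, 50.6, 75.9,
            113.9, 170.9, 256.3, 384.4, 576.7, 865.0)

fit <- lm(log(weight) ~ year)   # ln(Y) = ln(beta0) + beta1 * X
exp(coef(fit)[1])               # estimate of beta0 (about 6.67)
coef(fit)[2]                    # estimate of beta1 (about 0.405 = ln(1.5), i.e. 50% growth per year)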
Example: Species richness in the Galapagos Islands

Power: Y = β0 X^β1

[Figure: species richness (Nspecies) plotted against island area on arithmetic (left) and log-log (right) axes, together with the fitted power function; on the log-log scale the relationship is a straight line.]

log(Y) = log(β0) + β1 log(X)
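A minimal R sketch of the power-law fit; the island areas and richness values below are made up, not the actual Galapagos data:

area     <- c(1, 10, 100, 1000, 8000)      # hypothetical island areas
nspecies <- c(20, 35, 70, 140, 300)        # hypothetical species richness

fit <- lm(log10(nspecies) ~ log10(area))   # log(Y) = log(beta0) + beta1 * log(X)
10^coef(fit)[1]                            # beta0
coef(fit)[2]                               # beta1, the slope of the species-area relationship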
Statistics and transformation
Data to be analyzed using analysis of variance must meet two assumptions:
• The data must be homoscedastic: variances of treatment groups need to be approximately equal
• The residuals, or deviations from the mean, must be normal random variables
Let's look at an example
• A single variate of the simplest type of ANOVA (completely randomized, single classification) decomposes as follows:

Y_ij = μ + α_i + ε_ij

• In this model the components are additive, with the error term ε_ij distributed normally
However…
• We might encounter a situation in which the components are multiplicative in effect, where

Y_ij = μ · α_i · ε_ij

• If we fitted a standard ANOVA model, the observed deviations from the group means would lack normality and homoscedasticity
The logarithmic transformation
• We can correct this situation by transforming our model into logarithms:

Y* = log(Y)

Wherever the mean is positively correlated with the variance, the logarithmic transformation is likely to remedy the situation and make the variance independent of the mean.
We would obtain

log(Y_ij) = log(μ) + log(α_i) + log(ε_ij)

• which is additive and homoscedastic
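As a minimal R sketch (hypothetical data with multiplicative group effects), we can log-transform the response before fitting the ANOVA:

set.seed(2)
group <- factor(rep(c("A", "B", "C"), each = 10))
mu    <- rep(c(5, 10, 20), each = 10)      # group means that differ multiplicatively
Y     <- mu * exp(rnorm(30, sd = 0.3))     # multiplicative error: variance grows with the mean

fit_raw <- aov(Y ~ group)                  # residuals are heteroscedastic and skewed
fit_log <- aov(log(Y) ~ group)             # additive and homoscedastic on the log scale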
The square root transformation
• It is used most frequently with count data; such distributions are likely to be Poisson distributed rather than normally distributed.
• In the Poisson distribution the variance is the same as the mean.
• Transforming the variates to square roots generally makes the variances independent of the means for this type of data.
• When counts include zero values, it is desirable to code all variates by adding 0.5.
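A minimal R sketch (hypothetical Poisson counts) of the square-root transformation with the 0.5 adjustment for zeros:

set.seed(3)
counts <- rpois(30, lambda = 4)        # hypothetical count data, possibly including zeros
counts_star <- sqrt(counts + 0.5)      # square-root transformation, coded by adding 0.5

mean(counts); var(counts)              # roughly equal, as expected for Poisson data
var(counts_star)                       # variance roughly stabilized after transformation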
The Box-Cox transformation
• Often one does not have an a priori reason for selecting a specific transformation.
• Box and Cox (1964) developed a procedure for estimating the best transformation to normality within the family of power transformations:

Y* = (Y^λ − 1) / λ   for λ ≠ 0
Y* = log(Y)          for λ = 0
The Box-Cox transformation
• The value of lambda that maximizes the log-likelihood function

L = −(ν/2) ln(s²_T) + (λ − 1)(ν/n) Σ ln(Y)

yields the best transformation to normality within the family of transformations.
s²_T is the variance of the transformed values (based on ν degrees of freedom). The second term involves the sum of the natural logarithms of the untransformed values.
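As a minimal sketch, the hand-rolled R function below evaluates that log-likelihood for a candidate lambda (it assumes a positive data vector Y and uses v = n − 1 degrees of freedom; it mirrors the formula above rather than any particular package):

boxcox_loglik <- function(Y, lambda) {
  n <- length(Y)
  v <- n - 1                                   # degrees of freedom for the variance
  Yt <- if (lambda == 0) log(Y) else (Y^lambda - 1) / lambda
  -(v / 2) * log(var(Yt)) + (lambda - 1) * (v / n) * sum(log(Y))
}

# evaluate over a grid of lambda values and keep the maximizer
Y <- rexp(30, rate = 0.2)                      # hypothetical positive data
lams <- seq(-2, 2, by = 0.1)
lams[which.max(sapply(lams, boxcox_loglik, Y = Y))]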
Box-Cox in R (for a vector of data Y):

>library(MASS)
>lamb <- seq(0, 2.5, 0.5)
>boxcox(Y ~ 1, lambda = lamb, plotit = TRUE)
>library(car)
>transform_Y <- box.cox(Y, 0.5)   # replace 0.5 with the lambda chosen from the plot; recent versions of car provide bcPower() instead

[Figure: profile log-likelihood plotted against lambda, with the 95% confidence interval marked around the maximum.]
What do you conclude from this plot?

Read more in Sokal and Rohlf 2000, page 417.
The arcsine transformation
• Also known as the angular transformation
• It is especially appropriate for percentages
The arcsine transformation

Y* = arcsin(√Y)

[Figure: transformed data plotted against the original proportions, comparing the arcsine transformation with the linear (untransformed) values.]

It is appropriate only for data expressed as proportions.
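A minimal R sketch (hypothetical proportions) of the angular transformation:

p <- c(0.02, 0.10, 0.25, 0.50, 0.75, 0.90, 0.98)   # hypothetical proportions between 0 and 1
p_star <- asin(sqrt(p))                            # arcsine (angular) transformation, in radians

plot(p, p_star, type = "b",
     xlab = "Proportion (original data)", ylab = "Transformed data")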
Since the transformations discussed are NON-LINEAR, confidence limits computed on the transformed scale and changed back to the original scale would be asymmetrical.
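A minimal R sketch (hypothetical log-normal data) showing that confidence limits computed on the log scale become asymmetrical when back-transformed:

set.seed(4)
Y <- rlnorm(30, meanlog = 2, sdlog = 0.8)     # hypothetical positively skewed data

logY <- log(Y)
ci_log <- mean(logY) + c(-1, 1) * qt(0.975, length(Y) - 1) * sd(logY) / sqrt(length(Y))

exp(mean(logY))   # back-transformed mean (the geometric mean)
exp(ci_log)       # back-transformed limits lie at unequal distances from it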