
Sunday, May 14, 2017

A gentle introduction to Generalized Linear Models in R

What are generalized linear models?

Generalized linear models (glm) are a special form of linear models used when errors do not follow a normal distribution. In previous posts I’ve discussed linear models (lm), their use and interpretation.

To recap, linear models (lm) describe a response variable that depends on one or more independent variables:
y~x
Regular linear models have several assumptions; a really important one is the normal distribution of the errors. Errors are the differences between the observed and predicted values of the response variable.

Let’s use an example. Let’s say you are modeling the effect of seedling density on seedling herbivory, both measured as continuous variables.
herbivory~density
  If you assume a linear relationship between both variables, a linear model will produce a linear equation that allows us to predict how much herbivory a plant will have based on its density. Generally such equations are presented in the form:

\[ y \sim \alpha + \beta_1 x \]

Here \(\alpha\) is the intercept and \(\beta_1\) is the regression coefficient, which describes the linear effect of density on herbivory. If \(\beta_1 > 0\), an increase in density increases herbivory. This relationship is usually displayed as a regression line relating the dependent and independent variables.



  The errors are the differences (dotted line) between the observed values (dots on previous figure) and the regression line. When both variables are normally distributed and linearly related, the distribution of the errors should follow a normal distribution with zero mean.

Certain types of data, like counts or proportions, are subject to restrictions: counts can never be negative, and proportions are bound between 0 and 1. In most of these cases, the relationship between the dependent and independent variables is no longer linear and is usually described by non-linear equations. This means that the error distribution along the fitted line is no longer normal either. In these cases a generalized linear model should be preferred over a linear model.

As an example, let us consider a dichotomous response variable such as survival of a seedling in the forest one year after germination. The seedling may survive (success) or it may not (failure). We are interested in determining if herbivory affects the fate of seedlings. Our goal is to fit a model that describes the effect of herbivory on seedling survival. Survival is a probability and as such, its range is restricted to \( y \in [0, 1]\). In contrast, a regular linear model would fit a line that predicts values in the range \( \hat{y} \in (-\infty, \infty)\). This is clearly wrong: there are no negative probabilities, nor probabilities higher than one. This is one of the reasons why a linear model is not advisable in this scenario. We need to fit a model that limits the response variable between 0 and 1. A sigmoid relationship, similar to the one shown in the following figure, may be used to model the effect of herbivory on survival.

There are two approaches for fitting a sigmoidal curve. One is to fit a non-linear model to the data and try to estimate the exponential or sigmoidal equation that best describes the effect of herbivory on seedling survival. Non-linear models require very precise data and previous knowledge of the relationship between the variables, and they estimate several parameters. The second option is to use a linear model that fits our probability data. The second option is preferred since we have robust methods for fitting linear models.

Generalized linear models (glm) allow us to fit linear models to data that do not meet the criteria for linear regression. Glm's fit predictors that describe the relationship between the independent variables and the response variable while taking into account the restrictions imposed by the data. In our example, they predict expected values that lie between 0 and 1.

Predictions are performed through a predictor function. These predictor functions \(\eta(x)\) are not usually in the linear form \( y = \alpha + \beta_i x + \varepsilon_{ij} \) and therefore need to be linearized by a link function. In other words, the link function \( g(\eta) \) linearizes the non-linear predictor function \( \eta(x) \), allowing us to use robust methods. Since there are many predictor and link functions depending on the nature of the data, these models are referred to in the plural as generalized linear models.

  The previous explanation will become more clear with an example. Previously, we wanted to determine the effect of herbivory on the probability of seedlings surviving one year in the forest. The following equation models a sigmoidal curve that explains the effect of a linear independent variable \(x\) on the probability of survival \(p\).

\[ \eta(x) = p =\frac{e^{\alpha+\beta x}}{1+e^{\alpha + \beta x}} \]

We are interested in determining the effect of herbivory (\( \beta \)) on the probability of survival using a linear model. To this end we use the log of the odds, or logit, as the link function. The odds are the ratio between the probability of survival \(p\) and the probability of not surviving \( (1-p) \); the logit is the natural log of these odds. Therefore, the predictor equation may be transformed into a linear model as:

\[ g(\eta) = \ln\left({\frac{p}{1-p}}\right) = \alpha +\beta x \]

Now we may fit a regular linear model to estimate the parameter \(\beta\), which describes the effect of herbivory on survival probability. The only precaution we need to take is that the response variable has been transformed into logits, so we need to transform the estimates back into probabilities or odds ratios to interpret them.
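To see where the odds ratios used later come from, note that exponentiating both sides of the link function gives

\[ \frac{p}{1-p} = e^{\alpha + \beta x} = e^{\alpha}\left(e^{\beta}\right)^{x} \]

so each one-unit increase in \(x\) multiplies the odds of survival by \(e^{\beta}\); this is why the coefficients are exponentiated before interpreting them.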

There are different predictor and link functions depending on the type of response variable. Counts usually use a log link function, while probabilities or dichotomous variables use logit or probit link functions. I will not go into detail in this post about which link function should be used and why. In the next section I’ll show how to perform and interpret a glm in R.

GLMs in R

In R, generalized linear models are performed using the glm() command. It is similar to the lm() command as it requires a formula that describes the relationship between the dependent and the independent variables. However,  glm()  requires that we define an error distribution family. The family defines the distribution of the errors and chooses the appropriate link function. In R the two most commonly used families are Poisson and Binomial. Poisson is commonly used for counts, while the Binomial family is used in proportions or dichotomous response variables.
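As a quick orientation before the worked example, the calls differ mainly in the family argument. This is a generic sketch with made-up variable and data names (counts, alive, success, failure, mydata), not the data used below:

> # counts: Poisson family, default log link
> glm(counts ~ x, family = poisson, data = mydata)
> # dichotomous (0/1) response: binomial family, default logit link
> glm(alive ~ x, family = binomial, data = mydata)
> # proportions with known totals: two-column matrix of successes and failures
> glm(cbind(success, failure) ~ x, family = binomial, data = mydata)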

For our first example we will use a simple model that studies the effect of herbivory, measured as leaf area ( \(\textrm{mm}^2\) ) removed, on the fate of seedlings of the endemic Costa Rican oak Quercus costaricensis. One year after germination seedlings were recorded as survived (1) or died (0). We create a subsample of the data set in R as follows:

> oak <- data.frame(survival = c(1,1,1,0,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1),
+                   area = c(5.41,5.63,25.92,15.17,13.04,18.85,33.95,22.87,12.01,11.6,6.09,2.28,4.05,59.94,63.16,22.76,23.54,0.21,2.55))
> plot(survival~area, data=oak)

  The figure shows a likely decrease in survival with an increase in leaf area removed. We now need to test the effect using a glm. We fit a simple model of the effect of herbivory on survival using the binomial distribution which uses a logit link function. As with linear models, the result of the glm must be assigned to an object for downstream analysis.
> m1 <- glm(survival~area, data=oak, family=binomial)

The results are displayed after asking for a summary:
> summary(m1)

Call:
glm(formula = survival ~ area, family = binomial, data = oak)

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-1.5413  -0.5731   0.2427   0.4369   2.2065

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   3.5582     1.6145   2.204   0.0275 *
area         -0.2277     0.1009  -2.257   0.0240 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)
Null deviance: 26.287  on 18  degrees of freedom
Residual deviance: 13.113  on 17  degrees of freedom
AIC: 17.113
Number of Fisher Scoring iterations: 6 


As in previous posts, we will analyze the results in detail. The first part shows the model tested and the distribution of the deviance residuals. Since we are fitting a model, the differences between the observed values and those predicted by the model are the residuals. A summary of their distribution is presented as the minimum and maximum residuals, the first and third quartiles, and the median.

I normally just compare the Min-Max and 1Q-3Q to see if they are comparable in absolute size. If not, I worry about skewed data and overdispersion (more on that later).

R then presents us with the estimates for the linear model coefficients. In our case we fit a simple linear model in the form \( y \sim \alpha + \beta x + \varepsilon \), therefore there are only two parameters to be estimated: the intercept (\(\alpha\)) and the effect of area on survival (\(\beta\)). In this case the intercept is of little interest to us. But we do see a significant negative effect of area on survival ( \(\beta = -0.2277, p=0.0240\)).

The summary then tells us that the variance was estimated using the binomial family without overdispersion. The null deviance depicts the residual variance of a null model (i.e., just the intercept), while the residual deviance is the variance of the residuals when the covariate area is included in the model. This is why we have one less degree of freedom in the residual deviance: we are including a new term in the model. The summary ends with the number of iterations needed for convergence (I never check this).

Before we interpret the results, it is advisable to check some assumptions about the fit of the model. First of all we need to check for overdispersion. In linear models (lm) the variance of the residuals is estimated from the data. However, in glm’s the variance of the residuals is specified by the distribution used and is usually described as a particular relationship between the mean and the variance. For example, if we had chosen a Poisson distribution, the mean and the variance should be equal. A similar relationship exists for the Binomial distribution. If the data show a greater variance than expected under the distribution, we have overdispersion. When overdispersion occurs, the fitted model and the estimated parameters are biased and should not be used.

Overdispersion usually means that a covariate which has an important effect on the response variable was omitted. For a logistic model the deviance residuals should follow a chi-square distribution with \( (n-p) \) degrees of freedom where \(p\) is the number of parameters estimated. Based on the chi-square distribution the deviance and its degrees of freedom should be comparable. Thus, we can check the null hypothesis of no over-dispersion by:
> 1-pchisq(m1$deviance,m1$df.residual)
[1] 0.7285575
Since we do not see any evidence of overdispersion ( \(p>0.05\) ), we may assume the model is appropriate and test the significance of the model through a deviance test:
> anova(m1, test="Chisq")
Analysis of Deviance Table

Model: binomial, link: logit

Response: survival

Terms added sequentially (first to last)

               Df Deviance Resid. Df Resid. Dev  Pr(>Chi)
NULL                             18     26.287
area           1   13.174        17     13.114 0.0002839 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
  The results show that a model that includes area is significantly different than a model with just the intercept (NULL). Therefore we can conclude that there is a significant effect of area on survival.

Now, let’s get back to the interpretation of the coefficients. As we saw before, the model predicts two coefficients, the intercept and the slope of the regression.
> coef(m1)
(Intercept) area
3.5581582 -0.2276513
This shows that the effect of area is negative; however, we cannot interpret these values directly. As shown at the beginning of the post, our variables were transformed into logits. Logits are the log of the odds-ratios \( \tfrac{p}{1-p} \), the odds of an event happening (e.g., the seedling survives) over the odds of it not happening (the seedling dies). An odds-ratio (OR) of 1 means that seedling survival is as likely as death. An OR < 1 means that seedlings are more likely to die than to survive, while survival is more likely than death if OR > 1. So in order to interpret the coefficients from the glm, we need to convert logits into ORs by exponentiating them.
> exp(coef(m1))
(Intercept)    area
35.0984945     0.7964019
As previously mentioned, we are not too interested in the intercept. The OR for area is less than 1, suggesting that each additional \(\textrm{mm}^2\) of leaf area removed lowers the odds of survival by (\(1-0.7964=0.2036 \)) approximately 20%.
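If we want an actual survival probability rather than an OR, we can back-transform the full linear predictor with the inverse logit. A small sketch (the 10 \(\textrm{mm}^2\) value is just an illustrative choice):

> # predicted probability of survival for 10 mm2 of leaf area removed
> plogis(coef(m1)[1] + coef(m1)[2] * 10)
> # equivalently: predict(m1, data.frame(area = 10), type = "response")

With the estimates above this gives a survival probability of roughly 0.78.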

If we want 95% confidence intervals for the coefficients we can ask for them with the confint() function, but they also need to be exponentiated in order to interpret them as ORs.
> exp(confint(m1))
Waiting for profiling to be done...
                 2.5 %       97.5 %
(Intercept) 2.9039600 2466.3851121
area        0.6092671    0.9289333
Disregard the profiling message; the very wide intercept interval is probably a consequence of our small data set. The 95% confidence intervals for area do not include the value OR = 1, which confirms the significant negative effect of area on survival. We can conclude that each additional unit of leaf area removed by herbivores decreases the odds of survival; with 95% confidence, the decrease lies roughly between 7% and 39%.
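Those percentages come straight from the exponentiated confidence limits; a one-line check:

> 1 - exp(confint(m1))["area", ]

which returns approximately 0.39 and 0.07 for the 2.5% and 97.5% limits, respectively.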

R-square

There is no easy way of estimating \(R^2\) in glm’s. The best approximation is a ratio of the null and residual deviances. As previously mentioned the former shows the prediction of a model that only includes the intercept and the latter includes the covariates.
> 1 - (m1$dev/ m1$null)
[1] 0.5011413
We can interpret this value to say that approximately 50% of the deviance in survival is explained by herbivory.

Plot

A plot may be constructed in two ways: using the base graph commands in R or using ggplot2. Although ggplot is a handy tool, I always prefer the base graphs since they look better for publication. In order to plot the observed values and the expected curve, we need to calculate predicted values, as we did in the post where we calculated confidence intervals for regression lines.

> xv <- seq(0,70, length=1000)
> su.pre <- predict(m1, list(area=xv), type="response", se=T)
> plot(survival~area, data=oak, xlim=c(0,80), ylab="Seedling survival", xlab="Herbivory")
> lines(su.pre$fit~xv)
> lines(su.pre$fit+su.pre$se.fit ~ xv, lty=2)
> lines(su.pre$fit-su.pre$se.fit ~xv, lty=2)
In the previous commands we do the following:
  1. Create 1000 values between 0 and 70 to be used as \(x\) values for our regression model.
  2. Estimate \(\hat{y}\) values for each \(x\) value created in the previous step and standard errors for the estimates (option se=T).
  3. Plot the observed values with appropriate axis labels
  4. Add lines for the expected \(\hat{y}\) values (su.pre$fit) and their standard-error bands (su.pre$fit ± su.pre$se.fit).
The resulting graph shows the expected decrease in seedling survival with increasing herbivory.
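For completeness, since ggplot2 was mentioned above, an equivalent figure can be sketched with it (assuming the ggplot2 package is installed; the ribbon drawn by geom_smooth() is its own confidence band, not the ±1 SE lines used above):

> library(ggplot2)
> ggplot(oak, aes(x = area, y = survival)) +
+   geom_point() +
+   geom_smooth(method = "glm", method.args = list(family = "binomial")) +
+   labs(x = "Herbivory", y = "Seedling survival")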

Sunday, August 28, 2011

Comparing two regression slopes by means of an ANCOVA

Regressions are commonly used in biology to determine the causal relationship between two variables. This analysis is most commonly used in morphological studies, where the allometric relationship between two morphological variables is of fundamental interest. Comparing scaling parameters (i.e. slopes) between groups can be used by biologists to assess different growth patterns or the development of different forms or shapes between groups. For example, the regression between head size and body size may be different between males and females if they grow differently. This difference in allometric growth should manifest itself as a different slope in the two regression lines.

Ancova

The analysis of covariance (ANCOVA) is used to compare two or more regression lines by testing the effect of a categorical factor on a dependent variable (y-var) while controlling for the effect of a continuous co-variable (x-var). When we want to compare two or more regression lines, the categorical factor splits the relationship between x-var and y-var into several linear equations, one for each level of the categorical factor.
Regression lines are compared by studying the interaction of the categorical variable (i.e. treatment effect) with the continuous independent variable (x-var). If the interaction is significantly different from zero it means that the effect of the continuous covariate on the response depends on the level of the categorical factor. In other words, the regression lines have different slopes (right graph on the figure below). A significant treatment effect with no significant interaction shows that the covariate has the same effect for all levels of the categorical factor; however, since the treatment effect is important, the regression lines, although parallel, have different intercepts. Finally, if neither the treatment effect nor its interaction with the covariate is significant (but the covariate is), there is a single regression line. A reaction norm is used to graphically represent the possible outcomes of an ANCOVA.
[Figure ancova-fig-2: possible outcomes of an ANCOVA]
For this example we want to determine how body size (snout-vent length) relates to pelvic canal width in both male and female alligators (data from the Handbook of Biological Statistics). In this specific case, sex is a categorical factor with two levels (i.e. male and female), while snout-vent length is the regressor (x-var) and pelvic canal width is the response variable (y-var). The ANCOVA will be used to assess whether the regression between body size and pelvic width is comparable between the sexes.

Interpretation

An ANCOVA is able to test for differences in slopes and intercepts among regression lines. Both concepts have different biological interpretations. Differences in intercepts are interpreted as differences in magnitude but not in the rate of change. If we are measuring sizes and regression lines have the same slope but cross the y-axis at different values, lines should be parallel. This means that growth is similar for both lines but one group is simply larger than the other. A difference in slopes is interpreted as differences in the rate of change. In allometric studies, this means that there is a significant change in growth rates among groups.
Slopes should be tested first, by testing for the interaction between the covariate and the factor. If slopes are significantly different between groups, then testing for different intercepts is somewhat inconsequential since it is very likely that the intercepts differ too (unless they both go through zero). Additionally, if the interaction is significant, testing for main effects is meaningless (see The Infamous Type III SS). If the interaction between the covariate and the factor is not significantly different from zero, then we can assume the slopes are similar between equations. In this case, we may proceed to test for differences in intercept values among regression lines.

Performing an ANCOVA

For an ANCOVA our data should have a format very similar to that needed for an Analysis of Variance. We need a categorical factor with two or more levels (i.e. sex factor has two levels: male and female) and at least one independent variable and one dependent or response variable (y-var).
> head(gator)
sex snout pelvic
1 male  1.10   7.62
2 male  1.19   8.20
3 male  1.13   8.00
4 male  1.15   9.60
5 male  0.96   6.50
6 male  1.19   8.17
The preceding code shows the first six lines of the gator object which includes three variables: sex, snout and pelvic, which hold the sex, snout-vent size and the pelvic canal width of alligators, respectively. The sex variable is a factor with two levels, while the other two variables are numeric in their type.
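If you want to reproduce the analysis, the gator object can be created by reading a text file with those three columns. A sketch, where the file name gator.txt is a hypothetical placeholder:

> gator <- read.table("gator.txt", header = TRUE)   # hypothetical file name
> gator$sex <- factor(gator$sex)                    # make sure sex is a factor
> str(gator)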
We can do an ANCOVA both with the lm() and aov() commands. For this tutorial, we will use the aov() command due to its simplicity.
> mod1 <- aov(pelvic~snout*sex, data=gator)

> summary(mod1)
               Df Sum Sq Mean Sq  F value    Pr(>F)
snout        1 51.871  51.871 134.5392 8.278e-13 ***
sex          1  2.016   2.016   5.2284   0.02921 *
snout:sex    1  0.005   0.005   0.0129   0.91013
Residuals   31 11.952   0.386                   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
The previous code shows the ANCOVA model: pelvic is modeled as the dependent variable with sex as the factor and snout as the covariate. The summary of the results shows a significant effect of snout and sex, but no significant interaction. These results suggest that the slope of the regression between snout-vent length and pelvic width is similar for both males and females.
A second, more parsimonious model should be fit without the interaction to test for a significant difference in the intercepts.
> mod2 <- aov(pelvic~snout+sex, data=gator)
> summary(mod2)
        Df Sum Sq Mean Sq  F value    Pr(>F)
snout        1 51.871  51.871 138.8212 3.547e-13 ***
sex          1  2.016   2.016   5.3948   0.02671 *
Residuals   32 11.957   0.374                   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 
The second model shows that sex has a significant effect on the dependent variable which in this case can be interpreted as a significant difference in ‘intercepts’ between the regression lines of males and females. We can compare mod1 and mod2 with the anova() command to assess if removing the interaction significantly affects the fit of the model:
> anova(mod1,mod2)
Analysis of Variance Table

Model 1: pelvic ~ snout * sex
Model 2: pelvic ~ snout + sex
Res.Df    RSS Df  Sum of Sq      F Pr(>F)
1     31 11.952                        
2     32 11.957 -1 -0.0049928 0.0129 0.9101
The anova() command clearly shows that removing the interaction does not significantly affect the fit of the model (F=0.0129, p=0.91). Therefore, we may conclude that the most parsimonious model is mod2. Biologically we observe that for alligators, body size has a significant and positive effect on pelvic width and the effect is similar for males and females. However, we still don’t know the actual values of the slopes and intercepts.
At this point we are going to fit linear regressions separately for males and females. In most cases, this should have been performed before the ANCOVA. However, in this example we first tested for differences in the regression lines and once we were certain of the significant effects we proceeded to fit regression lines.
To accomplish this, we are now going to sub-set the data matrix into two sets, one for males and another for females. We can do this with the subset() command or using the extract functions []. We will use both in the following code for didactic purposes:
> machos <- subset(gator, sex=="male")
> hembras <- gator[gator$sex=='female',]
Separate regression lines can also be fitted using the subset option within the lm() command, however we will use separate data frames to simplify the creation of graphs:
> reg1 <- lm(pelvic~snout, data=machos); summary(reg1)
Call:
lm(formula = pelvic ~ snout, data = machos)

Residuals:
 Min       1Q   Median       3Q      Max
-0.85665 -0.40653 -0.08933  0.04518  1.57408

Coefficients:
        Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.4527     0.9697   0.467    0.647
snout         6.5854     0.8625   7.636 6.85e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7085 on 17 degrees of freedom
Multiple R-squared: 0.7742, Adjusted R-squared: 0.761
F-statistic:  58.3 on 1 and 17 DF,  p-value: 6.846e-07

> reg2 <- lm(pelvic~snout, data=hembras); summary(reg2)
Call:
lm(formula = pelvic ~ snout, data = hembras)

Residuals:
 Min       1Q   Median       3Q      Max
-0.69961 -0.19364 -0.07634  0.04907  1.15098

Coefficients:
        Estimate Std. Error t value Pr(>|t|)
(Intercept)  -0.2199     0.9689  -0.227    0.824
snout         6.7471     0.9574   7.047  5.8e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.4941 on 14 degrees of freedom
Multiple R-squared: 0.7801, Adjusted R-squared: 0.7644
F-statistic: 49.67 on 1 and 14 DF,  p-value: 5.797e-06 
The regression lines indicate that males have a higher intercept (a = 0.4527) than females (a = -0.2199), which means that, for a given snout-vent length, males have a wider pelvic canal. We can now plot both regression lines as follows:
> plot(pelvic~snout, data=gator, type='n')
> points(machos$snout,machos$pelvic, pch=20)
> points(hembras$snout,hembras$pelvic, pch=1)
> abline(reg1, lty=1)
> abline(reg2, lty=2)
> legend("bottomright", c("Male","Female"), lty=c(1,2), pch=c(20,1) )
The resulting plot shows the regression lines for males and females on the same plot.
[Figure fig-ancova-1: regression lines for male and female alligators]

Advanced

We can fit both regression models with a single call to the lm() command using the nested structure of snout nested within sex (i.e. sex/snout) and removing the single intercept for the model so that separate intercepts are fit for each equation.
> reg.todo <- lm(pelvic~sex/snout - 1, data=gator)
> summary(reg.todo)
Call:
lm(formula = pelvic ~ sex/snout - 1, data = gator)
Residuals:
  Min       1Q   Median       3Q      Max
-0.85665 -0.33099 -0.08933  0.05774  1.57408
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
sexfemale        -0.2199     1.2175  -0.181    0.858
sexmale           0.4527     0.8498   0.533    0.598
sexfemale:snout   6.7471     1.2031   5.608 3.76e-06 ***
sexmale:snout     6.5854     0.7558   8.713 7.73e-10 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.6209 on 31 degrees of freedom
Multiple R-squared: 0.9936,     Adjusted R-squared: 0.9928
F-statistic:  1213 on 4 and 31 DF,  p-value: < 2.2e-16

Conclusion

The results shown in the previous output are consistent with our previous findings: pelvic width is positively related to snout-vent length, and the relationship is linear for both males and females. The regression slope is positive and similar for both sexes (b ≈ 6.7, weighted average), which means that pelvic width grows faster than snout-vent length. Finally, the regression line of males intercepts the y-axis at a higher value than that of females, which means that males have a wider pelvic canal for a given body size.
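The weighted-average slope quoted above can be recomputed from the two separate regressions, weighting each slope by its sample size (a quick sketch using the objects created earlier):

> b.m <- coef(reg1)["snout"]; n.m <- nrow(machos)    # male slope and sample size
> b.f <- coef(reg2)["snout"]; n.f <- nrow(hembras)   # female slope and sample size
> (b.m * n.m + b.f * n.f) / (n.m + n.f)              # roughly 6.7 with the estimates above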

Saturday, November 22, 2008

Spatial Statistics. Part I.


Ripley's K(r) function

Spatial statistics are becoming an important tool for population ecologists. These techniques allow researchers to answer questions regarding the spatial arrangements of individuals within a population and the causal factors that may be influencing spatial distributions. I will detail some basic spatial statistics using the spatstat library.
> library(spatstat)
The first approach to a planar point pattern (ppp) is to describe the size of the plot, the number of individuals (i.e. points) and the density of such occurrences or instances. In spatial statistics density is usually referred to as intensity. We will be using the bei data-set included in the spatstat library. From the help page: «The dataset bei gives the positions of 3605 trees of the species Beilschmiedia pendula (Lauraceae) in a 1000 by 500 metre rectangular sampling region in the tropical rainforest of Barro Colorado Island.» The following code invokes the bei data-set and requests some summary statistics:
> data(bei)
> summary(bei)
Planar point pattern: 3604 points
Average intensity 0.00721 points per square metre
Window: rectangle = [0, 1000] x [0, 500] metres
Window area = 5e+05 square metres
Unit of length: 1 metre
> plot(bei)
The previous results show a total of 3604 points in a 50 ha plot. The average intensity (i.e. density) is 0.00721 points per square meter or 72.1 trees per hectare. The position of the trees is shown in the following graph produced by the plot(bei) command:


The plot evidently shows that this species --B. pendula-- is located more commonly in certain areas of the plot, while absent in others. This suggests an aggregated spatial distribution, which is commonplace for tropical trees. To assess the spatial distribution of these points, we will use Ripley's K(r) function. This method, also known as Ripley's reduced second moment function, estimates the expected number of points within a distance r of a randomly chosen point within the plot (Ripley 1976). Ripley's K(r) function is generally transformed as follows:

\[ L(r) = \sqrt{\frac{K(r)}{\pi}} - r \]

The L(r) function transforms the theoretically expected value for a random distribution into a horizontal line passing through the origin, and is therefore more easily interpreted than the K(r) function itself, which grows as \(\pi r^2\) under a random distribution. Ripley's K(r) function is produced by the Kest() command in R. Given the large number of points in the bei data-set, calculations may take a while on computers with slow processors. The L(r) transformation is performed on-the-fly by the plot() command.
> a1 <- Kest(bei, correction="isotropic", nlarge=Inf)
> plot(a1, sqrt(./pi)-r~r, ylab="L(r)")
The previous code requests Ripley's K(r) function for the bei data-set. We specifically request the isotropic correction for edge effects. Spatstat's Kest() function has a restriction for data-sets larger than 3000 points; in order to circumvent this restriction we must include the nlarge=Inf option in the command. Results are stored in the a1 object.

Ripley's L(r) function is shown in the previous graph. The x-axis shows the automatically selected radii (r) for which abundances are calculated, while the y-axis shows the L(r) function. The dotted red line is the expected value for a random distribution and the black solid line is the observed count. If observed values lie above the zero line (i.e. the random expectation) one should suspect an aggregated distribution. Nevertheless, we need to assess if this deviation is large enough to reject the null hypothesis of a random spatial distribution or CSR (complete spatial randomness). To answer this question we need to create confidence intervals for the null hypothesis using the envelope() command. These calculations require a lot of computer resources given the large number of points in the bei data-set, therefore I reduced the number of simulations to 50 from the default nsim=99:
> sobre <- envelope(bei, Kest, nlarge=Inf, nsim=50)
Generating 50 simulations of CSR ... 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50.
Done.
> sobre
Pointwise critical envelopes for K(r)
Obtained from 50 simulations of CSR
Significance level of pointwise Monte Carlo test: 2/51 = 0.0392156862745098
Data: bei
Function value object (class 'fv') for the function r -> K(r)
Entries:
id      label      description
--      -----      -----------
r       r          distance argument r
obs     obs(r)     function value for data pattern
theo    theo(r)    theoretical value for CSR
lo      lo(r)      lower pointwise envelope of simulations
hi      hi(r)      upper pointwise envelope of simulations
--------------------------------------
Default plot formula: . ~ r
Recommended range of argument r: [0, 125]
Unit of length: 1 metre
> plot(sobre, sqrt(./pi)-r~r, lty=c(1,2,3,3), col=c(1,2,3,3), ylab="L(t)")
The resulting graph is as follows:

Given that the observed count (black solid line) lies above the confidence envelope (green dotted lines) we can conclude that the spatial distribution of B. pendula trees significantly deviates from random expectations.

Conclusion

As expected for a tropical tree (Condit et al. 2000), Ripley's K(r) shows that B. pendula is spatially aggregated.

References:
Condit R, Ashton PS, Baker P, Bunyavejchewin S, Gunatilleke S, Gunatilleke N, Hubbell SP, Foster RB, Itoh A, LaFrankie JV, Lee HS, Losos E, Manokaran N, Sukumar R, Yamakura T (2000) Spatial patterns in the distribution of tropical tree species. Science 288:1414-1418.
Ripley BD (1976) The second-order analysis of stationary point processes. Journal of Applied Probability 13:255-256.
Ripley BD (1977) Modeling spatial patterns. Journal of the Royal Statistical Society, B. 39:172-212.

Saturday, October 20, 2007

The Infamous type III SS

Before I started using R, I was under the impression that there was only one type of sums of squares (SS) that should be used: type III SS. It actually was a bit confusing when some people chose to use other types of SS. I also didn't know that SAS invented the term: type III SS. Once I started using R and came across my first unbalanced factorial design...reality hit me.

First of all, I couldn't get R to deploy my coveted type III SS. I could clearly see that any summary coming from lm() or aov() was type I SS, or sequential sums of squares. After some reading, I could get type II SS from different models, but the process seemed a bit too complicated. It had to be simpler!!

I played with the idea of posting on R-help, but I was (again) scared away by the terrible comments from some of the gurus. After searching for endless hours on Google, I came across a bunch of interesting information. I read comments and posts regarding how bad type III SS are, mainly because they allow one to test for main effects even in the presence of an interaction. Honestly I don't think it is all that terrible, but I understand it shouldn't be done. I actually teach it that way to my students: "If an interaction is significant, the main effects are rarely interpretable".

In the other camp, I read proponents suggesting that type III SS are the only ones that allow hypothesis testing, given that they are order independent. Also very true.

I made up my mind when someone compared SAS to Micro$oft. From then on, I chose not to use type III SS if possible. Therefore, this entry is to teach how to deal with unbalanced factorials and how to get the appropriate SS and F values from R. And if you're so inclined, I will show you how to get the blasted type III SS's. Most of this information is scattered all over the Internet and I'm not to be credited for any of it. I just collected it for my personal use, and since this blog serves as my basket case for R methods I easily forget, they ended up here.

Let's begin with an example. Let's suppose we have a two-way unbalanced ANOVA to study the effect of forest type and species on herbivory. The biological hypothesis suggests that species vary in their susceptibility to herbivory, and that the percentage of leaf area damaged by herbivores is independent of forest type. We sampled three different forest types: riparian, transition and mature forests. In each forest type, we collected samples from four different species and determined the percent of area removed from a randomly selected leaf. Given that the number of samples per species is not constant in each forest type, the design is unbalanced. The data are presented in the following table:

Herbivory (percent of leaf area removed)

       Riparian              Transition           Mature
Sp1    42 44 36 13 19 22     33 26 33 21          31 3 25 25 24
Sp2    28 23 34 42 13        34 33 31 36          3 26 28 32 4 16
Sp3    1 29 19               11 9 7 1 6           21 1 9 3
Sp4    24 9 22 2 15          27 12 12 5 16 15     22 7 25 5 12
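If you want to follow along, one way to enter these data into R is sketched below. The cell-by-cell assignment is reconstructed from the table above and the replications() output shown further down, and the variable names bosque (forest), especie (species) and herbiv (herbivory) match those used in the code that follows; treat it as illustrative rather than the original data file:

> herb <- data.frame(
+   bosque  = rep(c("ripario", "transicion", "maduro"), times = c(19, 19, 20)),
+   especie = rep(rep(c("sp1", "sp2", "sp3", "sp4"), times = 3),
+                 times = c(6,5,3,5,  4,4,5,6,  5,6,4,5)),
+   herbiv  = c(42,44,36,13,19,22, 28,23,34,42,13, 1,29,19, 24,9,22,2,15,    # riparian
+               33,26,33,21, 34,33,31,36, 11,9,7,1,6, 27,12,12,5,16,15,      # transition
+               31,3,25,25,24, 3,26,28,32,4,16, 21,1,9,3, 22,7,25,5,12))     # mature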

We read the data and included it in R as a data frame named: herb. Now we can check the number of replicates using the replications() command:

> replications(herbiv~bosque*especie, data=herb)
$bosque
bosque
   maduro    ripario transicion
       20         19         19

$especie
especie
sp1 sp2 sp3 sp4
15  15  12  16

$"bosque:especie"
           especie
bosque       sp1 sp2 sp3 sp4
 maduro       5   6   4   5
 ripario      6   5   3   5
 transicion   4   4   5   6
As we can clearly see, the model is highly unbalanced. We shall perform an analysis of variance in various ways. First we will fit a saturated model, and then vary the order of the factors in the model:
> mod.sat <- aov(herbiv~bosque*especie, data=herb)
> mod.sat.2 <- aov(herbiv~especie*bosque, data=herb)

The summary tables for both models show that SS are calculated sequentially.

> summary(mod.sat)

               Df Sum Sq Mean Sq F value    Pr(>F) 
bosque          2  464.0   232.0  2.4766   0.09516 .
especie         3 2810.8   936.9 10.0017 3.434e-05 ***
bosque:especie  6  530.4    88.4  0.9436   0.47348 
Residuals      46 4309.1    93.7                   

> summary(mod.sat.2)
              Df Sum Sq Mean Sq F value    Pr(>F) 
especie         3 2834.8   944.9 10.0871 3.186e-05 ***
bosque          2  440.0   220.0  2.3486    0.1069 
especie:bosque  6  530.4    88.4  0.9436    0.4735 
Residuals      46 4309.1    93.7                   

Although very close, the SS for the species factor (i.e. especie) is different depending on the order of the factors. If species comes first then SS = 2834.8, but if species is the second term in the model then SS = 2810.8. These SS are all sequential SS's, known in SAS as Type I SS. In the SAS world, we would now use proc GLM, get type III sums of squares, and test hypotheses regarding the interaction and the main effects. Nonetheless, this approach may lead us down a dangerous path, since it allows us to test for main effects even in the presence of a significant interaction.

The R way

The advisable method in R is to search for the most parsimonious model and then obtain SS by comparing models that differ in the number of parameters (i.e. factors) included. This procedure is automagically done by the drop1() command. This command, as its name implies, drops terms from the model and compares the original model to the reduced one. It generally begins by removing non-significant interactions. The results given by the drop1() command are order-independent, therefore the following code examples will only be performed on mod.sat:

> drop1(mod.sat, test="F")
Single term deletions

Model:
herbiv ~ bosque * especie
               Df Sum of Sq    RSS    AIC F value  Pr(F)
<none>                      4309.1  273.9
bosque:especie  6     530.4 4839.5  268.6  0.9436 0.4735

The drop1() command used in the previous code requires two arguments: the name of the fitted aov() model and the type of test to perform. We asked R to drop terms from mod.sat, our saturated model. We also requested F statistics, which compare the original model with the new model without the term. AIC, the Akaike Information Criterion, measures the goodness of fit of a statistical model while taking into account the number of parameters included. The AIC statistic allows us to find the model that best fits the data with the minimum number of parameters.

One should choose the model with the lowest AIC value. In the previous output, removing the interaction term: bosque:especie; produces a better model with a lower AIC than the saturated formula (268.6 vs 273.9). Therefore, we can conclude that removing the interaction term should benefit our model. The F statistic provided is based on SS calculated by comparing models with and without the interaction term, and hence, can be used to report the non-significance of the interaction.

We now fit a linear model without the interaction term, and request the appropriate sums-of-square with the drop1().

> mod1 <- aov(herbiv~bosque+especie, data=herb)

> drop1(mod1, test="F")

Single term deletions

Model:
herbiv ~ bosque + especie
       Df Sum of Sq    RSS    AIC F value     Pr(F) 
<none>               4839.5  268.6                   
bosque   2     440.0 5279.5  269.6  2.3639    0.1041 
especie  3    2810.8 7650.2  289.2 10.0672 2.461e-05 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The output from the drop1() command shows that removing either of the main effects (i.e. 'bosque' or 'especie') would lower the goodness of fit of the model. This can be seen by inspecting the AIC statistics. The <none> row (removing nothing) has an AIC of 268.6, whereas removing 'bosque' or 'especie' gives a higher AIC value, suggesting that we already have the most parsimonious model.

F-statistics are computed by comparing the original model with one where the factor is removed. We can see that 'bosque' is not significant, while 'especie' is highly significant (F=10.07; df=3,52; p<0.001). As I perceive it, these are all type III SS, but I could be wrong. We can clearly see that they are order independent by fitting a model where 'especie' comes first:

> drop1(aov(herbiv~especie+bosque, herb), test="F")
Single term deletions

Model:
herbiv ~ especie + bosque
       Df Sum of Sq    RSS    AIC F value     Pr(F) 
<none>               4839.5  268.6                   
especie  3    2810.8 7650.2  289.2 10.0672 2.461e-05 ***
bosque   2     440.0 5279.5  269.6  2.3639    0.1041 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The results do not deviate from the previous analysis, thus confirming that this type of analysis is order independent. My only grievance with this type of analysis is that the ANOVA table has to be computed by hand. I still haven't figured out if this can be done automatically.

Type III SS from R

If you want to get type III sums-of-square anyway (maybe due to extraordinary circumstances: your boss demands them!) you can easily get them from R. As Dr. Venables says: you just have to know where to look for them.

I will not go into detail on how this works, mainly, because I don't know how it works. To get type III-SS, you have to do the following:

1. Change the default contrast matrix used by R, to one which produces orthogonal contrasts such as contr.helmert or contr.sum:

> options(contrasts=c("contr.helmert", "contr.poly"))

2. Fit a new anova model (saturated), which uses the new contrast matrix we selected in step 1.

> mod.III <- aov(herbiv~bosque*especie, data=herb)

3. Then use the drop1() command, choosing to remove all objects using the formula shorthand notation .~. :

> drop1(mod.III, .~., test="F")
Single term deletions

Model:
herbiv ~ bosque * especie 

               Df Sum of Sq    RSS   AIC F value     Pr(F)
<none>                      4309.1 273.9
bosque          2     436.3 4745.4 275.5  2.3289    0.1088
especie         3    2753.8 7062.8 296.5  9.7989 4.108e-05 ***
bosque:especie  6     530.4 4839.5 268.6  0.9436    0.4735
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

And voila!... You get type III sums-of-squares, which should make any SAS user happy. The car package has built-in functions to get type-III SS, but I've read they produce funny output sometimes. I've never tested this, since I don't like installing many libraries to do things you can get from the base stats module.
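For reference, the car route looks roughly like this (a sketch; as with the drop1() approach, the sum or Helmert contrasts from step 1 must already be in place for the type III tests to be meaningful):

> library(car)
> Anova(mod.III, type = 3)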

Conclusion

In my opinion the type III sums-of-squares issue is a bit technical for the average user. Although some suggest that the average user shouldn't be doing statistics, it is commonplace for many students and some researchers to have to deal with ANOVA on their own, with very little statistics background. I think R does the correct thing by not supplying the infamous SS's easily, and forcing the user to think about what he/she really wants. Nonetheless, this discussion should be part of R's help files, and the correct procedure should also be easier to find. Finally, the drop1() method should provide a properly formatted ANOVA table, but again, I may just not know how to do it...and the correct procedure is hidden somewhere within R's mysterious innards.

Saturday, October 06, 2007

Contrasts in R

One of the most neglected topics in Biostatistics is the calculation of contrasts in ANOVA. Most researchers are content with simply calculating an ANOVA table and stating that differences among groups are statistically significant. Some even present box-plots or any other graphic that show where the differences lie. However, few researchers actually test the differences they are looking for.

An ANOVA is designed to test a statistical hypothesis. It looks at the means of each treatment or level, and determines if all the means are equal or not. Nevertheless, the biological hypotheses introduced by the researchers need to be specifically addressed with contrasts. One can generalize and state that while ANOVA analyzes the statistical hypothesis, contrasts look into the biological hypothesis.

The following entry will show how to create a contrast in R. I will not go into detail regarding the theory behind contrasts. I suggest you read a good book on Experimental Design.

Let’s begin by using a data set included in R. We will use the InsectSprays data frame. This experiment shows the number of insects killed (variable count) by using six different insecticides (A:F). The data is loaded by using:

> data(InsectSprays)

Once the data has been invoked, we can start working on it. The first thing we want to do is to determine if the model is balanced. We can use the summary() command to assess some basic information on the data.frame:

> summary(InsectSprays)

     count       spray
 Min.   : 0.00   A:12
 1st Qu.: 3.00   B:12
 Median : 7.00   C:12
 Mean   : 9.50   D:12
 3rd Qu.:14.25   E:12
 Max.   :26.00   F:12

The previous output shows that we have a balanced design, with 12 replicates per group. Since this is a data frame, the character variable spray is stored by R as a factor.

Let's imagine there are three main questions. First, we want to determine if there are any differences between the spray treatments. Secondly, the researcher wants to know if the first three sprays differ from the latter three. Third, there is a strong indication that sprays A, B, and F should have a stronger effect than the remaining treatments. Since contrasts are pre-planned or a priori comparisons, we should start by creating the contrasts matrix. We need to create a vector of coefficients for each comparison, and then bind them into a matrix:

> c1 <- c(1,1,1,-1,-1,-1)

> c2 <- c(1,1,-1,-1,-1,1)

> mat <- cbind(c1,c2)

Secondly, we need to specify that the contrast matrix mat should be used to compute SS during anova calculations. We achieve this by assigning the contrasts matrix to the factor sprays using the contrasts() command:

> contrasts(InsectSprays$spray) <- mat

It should be noted, that once this assignment is implemented, the aov() command will calculate SS for the contrasts established in each column of mat. If you wish to change the contrasts, then a new contrasts matrix should be created and assigned to the factor.

We now perform the analysis of variance, and request a summary table. The anova table is split to include the contrasts:

> model1 <- aov(count ~ spray, data = InsectSprays)

> summary(model1, split=list(spray=list("First 3 vs other"=1, "ABF vs CDE"=2)))

                          Df  Sum Sq Mean Sq  F value   Pr(>F)
spray                      5 2668.83  533.77   34.702   <2e-16 ***
  spray: First 3 vs other  1   93.39   93.39   6.0716  0.01635 *
  spray: ABF vs CDE        1 2558.67 2558.67 166.3495  < 2e-16 ***
Residuals                 66 1015.17   15.38
---
Signif. codes: 0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1

Let's analyze the previous code. First we conduct an analysis of variance with the aov() command; the results are stored in the model1 object. Immediately after, we request a summary table with the summary() command. The summary command includes the split option, which takes a list of factors for which contrasts are stored. Within each factor, we provide a named list identifying each contrast (i.e. each column of the contrasts matrix mat).

The ANOVA table shows a significant effect of the spray varieties on insect mortality. The contrasts suggest significant differences between the first three and the latter three spray types. Finally, we see that most of the treatment SS are explained by the comparison of the ABF and CDE groups. The comparisons among groups are better observed using a boxplot, which clearly shows that the main differences lie between the groups compared by the second contrast:

> boxplot(count ~ spray, data = InsectSprays,
+         xlab = "Type of spray", ylab = "Insect count",
+         main = "InsectSprays data", col = "skyblue")

Saturday, September 15, 2007

Neighbor Joining Tree with Ape

Today I used R to create a neighbor joining phenogram, using the ape library. The phenogram will be used to visualize a species' genetic differentiation among six different locations in Costa Rica. Genetic distances were calculated from microsatellite data using GenAlEx, because I haven't figured out how to calculate Nei's genetic dissimilarities with R. This is the procedure I used:

First I imported the genetic distance matrix from Exc..l. The matrix used was symmetric, meaning that the same distances were mirrored above and below the diagonal. I copied the matrix into the clipboard from Exc..l using Ctrl-C. The distance matrix included row and column names, the six different locations. The matrix was imported into R by assigning it to an object named m:

> m <- as.matrix(read.table("clipboard", head=T, row.names=1))

The previous command imports the data stored in the clipboard and transforms the data frame into a matrix format. The options of the read.table() command: head=T and row.names=1, tell R that the first line of the matrix is a header and that row names are stored in the first column, respectively. After checking the matrix was imported correctly, I proceeded to load the ape library.

> library(ape)

The ape library (Analyses of Phylogenetics and Evolution) contains various methods for the analysis of genetic and evolutionary data. Ape also provides commands designed to calculate distances from DNA sequences; I will tackle those in another post. Now back to my tree. After importing the distance matrix and loading the ape library, I proceeded to create the phylogenetic tree with the nj() command:

> arbol <- nj(as.dist(m))
> plot(arbol, type="unrooted")

The type="unrooted" option passed to the plot() command draws, as expected, an unrooted tree (neighbor-joining trees are unrooted by default). The resulting tree topology was stored in the 'arbol' object. The plot command produced:

We can observe how 'Puriscal' is located between two well demarcated groups, one including 'Guapiles' and 'Guatuso', and the other containing the remaining three populations.
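If you don't have a distance matrix at hand, here is a self-contained toy version of the same workflow. The distances and the popN labels are made up purely for illustration:

> library(ape)
> m.toy <- matrix(c(0.00, 0.10, 0.12, 0.30, 0.32, 0.31,
+                   0.10, 0.00, 0.08, 0.28, 0.30, 0.29,
+                   0.12, 0.08, 0.00, 0.27, 0.29, 0.28,
+                   0.30, 0.28, 0.27, 0.00, 0.09, 0.11,
+                   0.32, 0.30, 0.29, 0.09, 0.00, 0.07,
+                   0.31, 0.29, 0.28, 0.11, 0.07, 0.00),
+                 nrow = 6, dimnames = list(paste0("pop", 1:6), paste0("pop", 1:6)))
> arbol.toy <- nj(as.dist(m.toy))
> plot(arbol.toy, type = "unrooted")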


Friday, September 29, 2006

Mantel Partial Regression

To perform a partial Mantel test on three different distance matrices:

> library(vegan)
> mantel.partial(xdis, ydis, zdis,
+                method = "pearson", permutations = 1000)

Example.
Matrix 1: Genetic distance (Nei's D).
Matrix 2: Geographic distance.
Matrix 3: Regional correspondence (i.e. a binary matrix: 1 if two populations belong to the same region, zero otherwise).

This allows us to test whether a correlation between geographic distance and genetic differentiation exists once regional correspondence is taken into account.
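A self-contained toy illustration of the call, with random stand-in matrices purely to show the mechanics (assumes the vegan package is installed):

> library(vegan)
> set.seed(1)
> gen <- dist(matrix(rnorm(10 * 5), nrow = 10))   # stand-in for Nei's genetic distance
> geo <- dist(matrix(runif(10 * 2), nrow = 10))   # stand-in for geographic distance
> reg <- dist(sample(0:1, 10, replace = TRUE))    # stand-in for regional correspondence
> mantel.partial(gen, geo, reg, method = "pearson", permutations = 1000)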