In Chapter 4 (section 4.4) of *Applied Longitudinal Data Analysis* (ALDA), Singer and Willett recommend fitting two simple unconditional models before you begin multilevel model building in earnest. These two models “allow you to establish: (1) whether there is systematic variation in your outcome that is worth exploring; and (2) where that variation resides (within or between people).” (p. 92) This is a great idea. Why start building models if there is no variation to explain? In this post I want to summarize these two models for reference purposes.

**Model 1: The Unconditional Means Model**

- The keyword here is “means”. This model has one fixed effect that estimates the *grand mean* of the response across all occasions and individuals.
- The main reason to fit this model is to examine the random effects (i.e., the within-person and between-person variance components). This tells us the amount of variation that exists at the within-person level and the between-person level.
- Model specification: $Y_{ij} = \gamma_{00} + \zeta_{0i} + \epsilon_{ij}$
- $\gamma_{00}$ = grand mean (fixed effect)
- $\zeta_{0i}$ = the amount person i’s mean deviates from the population mean (between-person), with variance $\sigma_0^2$
- $\epsilon_{ij}$ = the amount the response on occasion j deviates from person i’s mean (within-person), with variance $\sigma_{\epsilon}^2$

- Use the intraclass correlation coefficient to describe the proportion of the total outcome variation that lies “between” people: $\rho = \sigma_0^2 / (\sigma_0^2 + \sigma_{\epsilon}^2)$
- In the unconditional means model the intraclass correlation coefficient is also the “error autocorrelation coefficient”, which estimates the average correlation between any pair of composite residuals $\zeta_{0i} + \epsilon_{ij}$.
- Sample R code for fitting the unconditional means model (where “id” = person-level grouping indicator):
library(nlme)
lme(response ~ 1, data=dataset, random= ~ 1 | id)

Or this:

library(lme4)
lmer(response ~ 1 + (1 | id), dataset)
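To see what these calls return before turning to the real data, here is a self-contained toy run on simulated data; everything below (the data, the variance values, the seed) is invented for illustration:

```r
library(nlme)  # ships with standard R installations

# Hypothetical data: 82 "people", 3 occasions each,
# true between-person SD = 0.75 and within-person SD = 0.75
set.seed(1)
id <- rep(1:82, each = 3)
y  <- rnorm(82, sd = 0.75)[id] + rnorm(82 * 3, sd = 0.75)
d  <- data.frame(id = id, y = y)

# Unconditional means model: intercept only, random intercept per person
fit <- lme(y ~ 1, data = d, random = ~ 1 | id, method = "ML")

# summary(fit) reports standard deviations; VarCorr() also shows variances
VarCorr(fit)
```

The first column of the `VarCorr()` table holds the two variance components (between-person, then residual), which is all the unconditional means model has to offer.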

To replicate the *Unconditional Means Model* example in ALDA, the UCLA stats page suggests the following:

alcohol1 <- read.table("http://www.ats.ucla.edu/stat/r/examples/alda/data/alcohol1_pp.txt", header=T, sep=",")
library(nlme)
model.a <- lme(alcuse ~ 1, alcohol1, random= ~1 |id)
summary(model.a)

This works OK, but returns slightly different results because it fits the model using REML (Restricted Maximum Likelihood) instead of ML (Maximum Likelihood). It also reports the estimated between-person variance $\sigma_0^2$ as a standard deviation rather than a variance. We can fix the first issue by including the argument method="ML", and the second by passing the fitted model to VarCorr(), which converts the standard deviations to variances. Even so, the lmer() function makes it easier to replicate the example and obtain the same results presented in the book, as follows (notice we request ML indirectly, via the argument REML = FALSE):

model.a1 <- lmer(alcuse ~ 1 + (1 | id), alcohol1, REML = FALSE)
summary(model.a1)

The output provides the values discussed in the book in the "Random effects" section under the variance column:

> summary(model.a1)
Linear mixed model fit by maximum likelihood 
Formula: alcuse ~ 1 + (1 | id) 
   Data: alcohol1 
   AIC   BIC logLik deviance REMLdev
 676.2 686.7 -335.1    670.2     673
Random effects:
 Groups   Name        Variance Std.Dev.
 id       (Intercept) 0.56386  0.75091 
 Residual             0.56175  0.74950 
Number of obs: 246, groups: id, 82

Fixed effects:
            Estimate Std. Error t value
(Intercept)   0.9220     0.0957   9.633

The "Random effect" *id* has variance = 0.564. That's the between-person variance. The "Random effect" *Residual* has variance = 0.562. That's the within-person variance. We can access these values using "summary(model.a1)@REmat" and calculate the intraclass correlation coefficient like so:

icc_n <- as.numeric(summary(model.a1)@REmat[1,3])
icc_d <- as.numeric(summary(model.a1)@REmat[1,3]) + as.numeric(summary(model.a1)@REmat[2,3])
icc_n / icc_d
[1] 0.5009373
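(Note: in recent versions of lme4 the @REmat slot no longer exists; as.data.frame(VarCorr(model.a1)) is the current way to pull these numbers out of a fitted model.) As a quick sanity check, the ICC can also be computed by hand from the variances printed above:

```r
# Variance estimates copied from the model output above
between_person <- 0.56386  # id (Intercept) variance
within_person  <- 0.56175  # Residual variance

# Intraclass correlation: proportion of total variation that is between-person
icc <- between_person / (between_person + within_person)
round(icc, 4)  # 0.5009, matching the lmer-based calculation
```

So about half of the variation in alcuse lies between people, which is exactly the kind of evidence that justifies moving on to richer multilevel models.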

**Model 2: The Unconditional Growth Model**

- This model partitions and quantifies variance across people *and* time.
- The fixed effects estimate the starting point and slope of the population average change trajectory.
- Model specification: $Y_{ij} = \gamma_{00} + \gamma_{10} TIME_{ij} + \zeta_{0i} + \zeta_{1i} TIME_{ij} + \epsilon_{ij}$
- $\gamma_{00}$ = average intercept (fixed effect)
- $\gamma_{10}$ = average slope (fixed effect)
- $\zeta_{0i}$ = the amount person i's intercept deviates from the population intercept, with variance $\sigma_0^2$
- $\zeta_{1i}$ = the amount person i's slope deviates from the population slope, with variance $\sigma_1^2$
- $\epsilon_{ij}$ = the amount the response on occasion j deviates from person i's true change trajectory, with variance $\sigma_{\epsilon}^2$
- $\zeta_{0i}$ and $\zeta_{1i}$ have covariance $\sigma_{01}$

- The residual variance $\sigma_{\epsilon}^2$ summarizes the average scatter of an individual's observed outcome values around his/her own true change trajectory. Compare this to the same value in the unconditional means model to see if within-person variation is systematically associated with linear time.
- The level-2 variance components, $\sigma_0^2$ and $\sigma_1^2$, quantify the unpredicted variability in the intercept and slope of individuals. That is, they assess the scatter of a person's intercept and slope about the population average change trajectory. DO NOT compare to the same values in the unconditional means model since they have a different interpretation.
- The level-2 covariance $\sigma_{01}$ quantifies the population covariance between the true initial status (intercept) and true change (slope). Interpretation is easier if we re-express the covariance as a correlation coefficient: $\sigma_{01} / \sqrt{\sigma_0^2 \sigma_1^2}$
- Sample R code for fitting the unconditional growth model (where "id" = person-level grouping indicator):
lme(response ~ time, data=dataset, random= ~ time | id)

Or this:

lmer(response ~ time + (time | id), dataset)
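As with the means model, a toy run can make the pieces concrete before the real example; a sketch on simulated data, where all names, values, and the seed are invented for illustration:

```r
library(nlme)  # ships with standard R installations

set.seed(2)
# Hypothetical data: 82 "people" observed at times 0, 1, 2
id   <- rep(1:82, each = 3)
time <- rep(0:2, times = 82)
int_i   <- rnorm(82, sd = 0.8)[id]  # person-specific intercept deviations
slope_i <- rnorm(82, sd = 0.4)[id]  # person-specific slope deviations
y <- 0.65 + 0.27 * time + int_i + slope_i * time + rnorm(82 * 3, sd = 0.6)
d <- data.frame(id = id, time = time, y = y)

# Unconditional growth model: random intercept and slope per person
fit <- lme(y ~ time, data = d, random = ~ time | id, method = "ML")
VarCorr(fit)  # intercept variance, slope variance, their correlation, residual
```

The `VarCorr()` table now has three variance components plus the intercept-slope correlation, which maps directly onto the bullet list above.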

To replicate the *Unconditional Growth Model* example in ALDA, the UCLA stats page suggests the following:

alcohol1 <- read.table("http://www.ats.ucla.edu/stat/r/examples/alda/data/alcohol1_pp.txt", header=T, sep=",")
library(nlme)
model.b <- lme(alcuse ~ age_14, data=alcohol1, random= ~ age_14 | id, method="ML")
summary(model.b)

However, I think the following is better, as it reports the same values as the book:

model.b1 <- lmer(alcuse ~ age_14 + (age_14 | id), alcohol1, REML = FALSE)
summary(model.b1)

For instance, it provides variance values instead of standard deviation values. That doesn't really matter in the long run, but it makes it easier to quickly check your work against the book. Here's the output:

> summary(model.b1)
Linear mixed model fit by maximum likelihood 
Formula: alcuse ~ age_14 + (age_14 | id) 
   Data: alcohol1 
   AIC   BIC logLik deviance REMLdev
 648.6 669.6 -318.3    636.6   643.2
Random effects:
 Groups   Name        Variance Std.Dev. Corr   
 id       (Intercept) 0.62436  0.79017         
          age_14      0.15120  0.38885  -0.223 
 Residual             0.33729  0.58077         
Number of obs: 246, groups: id, 82

Fixed effects:
            Estimate Std. Error t value
(Intercept)  0.65130    0.10508   6.198
age_14       0.27065    0.06246   4.334

Correlation of Fixed Effects:
       (Intr)
age_14 -0.441

Again the main section to review is the "Random effects". The Residual variance (within-person) has decreased to 0.337 from 0.562. We can conclude that $(0.562 - 0.337)/0.562 \approx 0.40$ (i.e., 40%) of the within-person variation in the response is systematically associated with linear time. We also see the negative correlation (-0.223) between the true initial status (intercept) and true change (slope). However, the book notes this correlation is not statistically significant. As you can see, this is not something the output of the lmer object reports. The book mentions in chapter 3 (p. 73) that statisticians disagree about the effectiveness of such significance tests on variance components, and I can only assume the authors of the lme4 package question their use. Finally, we notice the level-2 variance components: 0.624 and 0.151. These provide a benchmark for predictors' effects as the authors proceed to build models.
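Both of those calculations can be verified by hand from the two outputs above; a quick base R check, with the variance, standard deviation, and correlation values copied from the printed summaries:

```r
# Within-person (residual) variances from the two models above
resid_means  <- 0.56175  # unconditional means model
resid_growth <- 0.33729  # unconditional growth model

# Proportion of within-person variation associated with linear time
pseudo_r2 <- (resid_means - resid_growth) / resid_means
round(pseudo_r2, 2)  # 0.4, i.e. about 40%

# Recover the level-2 covariance from the printed SDs and correlation
cov_01 <- -0.223 * 0.79017 * 0.38885
round(cov_01, 3)  # about -0.069
```

Recovering the covariance this way is handy because lmer's output only prints the correlation, while the book reports the covariance.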
