The stats::filter() function

The base R function filter() can be used to calculate moving averages. It is one of the base R functions masked when the {dplyr} package is loaded, which is why I call it with the stats::filter() prefix below.

Before we see how it works, let’s create some toy data.

n <- 100
t <- as.Date(seq(n), origin = as.Date("01/01/2022", "%m/%d/%Y"))
set.seed(1)
y <- cos(pi*seq(-2,2, length.out = n)) + rnorm(n, sd = 0.5)
d <- data.frame(t, y)

library(ggplot2)
ggplot(d) +
  aes(t, y) +
  geom_line()

3 day moving average

Let’s say we want to calculate a 3 day moving average. Two ways to approach this:

  1. on any given day, take the average of the day before, the current day, and the next day. In other words, use data from both sides of the current day.

  2. on any given day, take the average of the two prior days and the current day. In other words, use data from only one side of the current day.

The first approach is the default of the filter() function. The first argument is the vector of data we want to calculate the moving average for. The second argument is the filter we want to apply to the data. These are the coefficients we apply to the data before summing. For a 3 day moving average this is a vector of three values of 1/3, which we can create using rep(1/3, 3). The sides = 2 argument says to use both sides of the data.

d$ma3 <- stats::filter(d$y, filter = rep(1/3, 3), sides = 2)

Let’s look at the first three values. There’s an NA for day 1 because we have no data for the prior day.

d$ma3[1:3]
## [1]        NA 0.7735613 1.1199731

The first 3 day moving average is 0.7735613 calculated at day 2. This is the average of days 1, 2, and 3.

mean(y[1:3])
## [1] 0.7735613

Notice we can get the same result by multiplying each data point by 1/3 and summing. This is the “filter” applied to the data by the filter() function.

sum(1/3 * y[1:3])
## [1] 0.7735613

One reason to calculate a moving average is to smooth out day-to-day variation. Below we plot the original data with the 3 day moving average superimposed.

ggplot(d) +
  aes(t, y) +
  geom_line(alpha = 1/4) +
  geom_line(aes(y = ma3), color  = "red")
## Warning: Removed 2 rows containing missing values or values outside the scale range
## (`geom_line()`).

The other approach is to use one side of the data. That means setting sides = 1. Now we get two NAs at the beginning because days 1 and 2 do not have two prior days.

d$ma3 <- stats::filter(d$y, filter = rep(1/3, 3), sides = 1)
d$ma3[1:3]
## [1]        NA        NA 0.7735613

Odd versus even numbers

Odd numbers are good to use for sides = 2 because we’ll get an equal number of days before and after the current day. If an even number is used, “more of the filter is forward in time than backward” (?filter).

For example, consider a 6 day moving average:

d$ma6 <- stats::filter(d$y, filter = rep(1/6, 6), sides = 2)
d$ma6[1:3]
## [1]        NA        NA 0.9133886

The first 6 day moving average occurs at day 3, which is the average of the current day, the 2 previous days, and the 3 following days. Notice “more of the filter is forward in time than backward”:

1 2 [3] 4 5 6

sum(1/6 * y[1:6])
## [1] 0.9133886

Other functions

Three other functions for calculating moving averages are:

  1. runmean() from the {caTools} package
  2. frollmean() from the {data.table} package
  3. roll_mean() from the {RcppRoll} package

Below I use all three to replicate the stats::filter() result.

For caTools::runmean() we need to specify endrule = "NA", otherwise it uses an algorithm to calculate means at the extremes using windows smaller than 3.

For data.table::frollmean() we need to specify align = "center", which is the same as sides = 2 for filter(). Otherwise it defaults to align = "right" which is equivalent to sides = 1 for filter().

For RcppRoll::roll_mean() we need to specify fill = NA to pad the output with missing values at the extremes. Otherwise, in this case, it returns a vector of length 98 instead of 100.

d$ma3 <- stats::filter(d$y, filter = rep(1/3, 3), sides = 2)
d$ma3_runmean <- caTools::runmean(d$y, k = 3, endrule = "NA")
d$ma3_frollmean <- data.table::frollmean(d$y, n = 3, align = "center")
d$ma3_rcpproll <- RcppRoll::roll_mean(d$y, n = 3, align = "center",
                                      fill = NA)
head(d[,c("ma3", "ma3_runmean", "ma3_frollmean", "ma3_rcpproll")])
##         ma3 ma3_runmean ma3_frollmean ma3_rcpproll
## 1        NA          NA            NA           NA
## 2 0.7735613   0.7735613     0.7735613    0.7735613
## 3 1.1199731   1.1199731     1.1199731    1.1199731
## 4 1.1049153   1.1049153     1.1049153    1.1049153
## 5 1.0532159   1.0532159     1.0532159    1.0532159
## 6 0.8003626   0.8003626     0.8003626    0.8003626
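
A couple of quick checks on those defaults (a small sketch, assuming the same packages are installed): without endrule = "NA", runmean() fills in the end points using smaller windows, and frollmean()'s default align = "right" matches filter() with sides = 1.

# without endrule = "NA", the first value is computed from a smaller window
head(caTools::runmean(d$y, k = 3), 3)

# the right-aligned (trailing) 3 day average matches sides = 1
all.equal(as.numeric(stats::filter(d$y, filter = rep(1/3, 3), sides = 1)),
          data.table::frollmean(d$y, n = 3, align = "right"))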

The Secret Weapon: section 10.9 of Regression and Other Stories

In section 10.9 of Regression and Other Stories, the authors introduce the idea of fitting the same model to many data sets. They begin the section by saying, “it is common to fit a regression model repeatedly…to subsets of an existing data set.” But then two paragraphs later they call this idea a “secret weapon” because “it is so easy and powerful but yet is rarely used.” So which is it? Common or rarely used?

Anyway, they provide code to demonstrate this idea using NES data, which creates Figure 10.9 on page 149. While I enjoyed working through this code and figuring out how it works, I decided to re-implement it using data frames and ggplot2.

First I load the {rstanarm} package and read in the data.

library(rstanarm)
data <- read.table("https://github.com/avehtari/ROS-Examples/raw/refs/heads/master/NES/data/nes.txt")

Next I modify their custom function as follows. The changes are in the coefs object that is created. Instead of a vector, I return a data frame. I also add the variable name and year to the data frame.

coef_names <- c("Intercept", "Ideology", "Black", "Age_30_44", 
                "Age_45_64", "Age_65_up", "Education", "Female", "Income")
regress_year <- function (yr) {
  this_year <- data[data$year==yr,]
  fit <- stan_glm(partyid7 ~ real_ideo + race_adj + factor(age_discrete) +
                    educ1 + female + income,
                  data=this_year, warmup = 500, iter = 1500, refresh = 0,
                  save_warmup = FALSE, cores = 1, open_progress = FALSE)
  coefs <- data.frame(var = coef_names, 
                      coef = coef(fit), 
                      se = se(fit), 
                      year = yr)
  coefs
}

Now I run the function using lapply(), combine the list of data frames into one data frame, add upper and lower limits of a 50% confidence interval (plus or minus 0.67 standard errors, since qnorm(0.75) is about 0.67), and set the variable name as a factor.

sum2 <- lapply(seq(1972,2000,4), regress_year)
sumd <- do.call(rbind, sum2)
sumd$upper <- sumd$coef + sumd$se*0.67
sumd$lower <- sumd$coef - sumd$se*0.67
sumd$var <- factor(sumd$var, levels = coef_names)

And finally I create the plot using {ggplot2}.

library(ggplot2)
ggplot(sumd) +
  aes(x = year, y = coef) +
  geom_point() +
  geom_errorbar(mapping = aes(ymin = lower, ymax = upper), width = 0) +
  geom_hline(yintercept = 0, linetype = 2) +
  facet_wrap(~ var, scales = "free") +
  theme_classic()

According to the book, “ideology and ethnicity are the most important, and their coefficients have been changing over time.”

The above approach uses Bayesian modeling via the {rstanarm} package. We can also deploy the secret weapon using a frequentist approach via the lmList() function from the {lme4} package. Below I first subset the data to select only the columns I need for the years 1972 – 2000. This is necessary due to the amount of missingness in the data.

library(lme4)
vars <- c("partyid7", "real_ideo" , "race_adj" , "age_discrete" ,
  "educ1" , "female" , "income", "year")
d <- data[data$year %in% seq(1972,2000,4),vars]
fm1 <- lmList(partyid7 ~ real_ideo + race_adj + factor(age_discrete) +
                educ1 + female + income | year, 
              data = d)

Calling summary on the object lists a summary of coefficients over time.

summary(fm1)
## Call:
##   Model: partyid7 ~ real_ideo + race_adj + factor(age_discrete) + educ1 + female + income | NULL 
##    Data: d 
## 
## Coefficients:
##    (Intercept) 
##         Estimate Std. Error    t value     Pr(>|t|)
## 1972  1.76583047  0.3713743  4.7548538 2.019151e-06
## 1976  1.11162271  0.4038796  2.7523615 5.929490e-03
## 1980  1.70511202  0.5101462  3.3423987 8.342261e-04
## 1984  2.27900236  0.3731396  6.1076403 1.056356e-09
## 1988  3.04892311  0.4003568  7.6155146 2.913307e-14
## 1992  1.44675559  0.3479402  4.1580579 3.241743e-05
## 1996 -0.06088684  0.4523491 -0.1346014 8.929303e-01
## 2000  0.71444233  0.6875610  1.0390967 2.987898e-01
##    real_ideo 
##       Estimate Std. Error  t value      Pr(>|t|)
## 1972 0.4846373 0.04058343 11.94175  1.320025e-32
## 1976 0.5874170 0.04129972 14.22327  2.220613e-45
## 1980 0.6039183 0.04995714 12.08873  2.298924e-33
## 1984 0.6262146 0.03860522 16.22098  2.771748e-58
## 1988 0.6219214 0.03971156 15.66097  1.659414e-54
## 1992 0.7075364 0.03500057 20.21499  9.375655e-89
## 1996 0.9364460 0.04067273 23.02393 8.874569e-114
## 2000 0.7892252 0.06129477 12.87590  1.399443e-37
##    race_adj 
##       Estimate Std. Error    t value     Pr(>|t|)
## 1972 -1.105586  0.1870994  -5.909083 3.575067e-09
## 1976 -1.097028  0.1980980  -5.537805 3.155805e-08
## 1980 -1.284468  0.2429898  -5.286098 1.281122e-07
## 1984 -1.483666  0.1812912  -8.183885 3.153604e-16
## 1988 -1.732068  0.1721402 -10.061961 1.109738e-23
## 1992 -1.346218  0.1564942  -8.602349 9.232858e-18
## 1996 -1.220354  0.1846709  -6.608263 4.126844e-11
## 2000 -1.079103  0.2948775  -3.659496 2.542700e-04
##    factor(age_discrete)2 
##         Estimate Std. Error    t value   Pr(>|t|)
## 1972 -0.18895861  0.1373869 -1.3753757 0.16905188
## 1976 -0.03744826  0.1486541 -0.2519154 0.80111262
## 1980 -0.14625870  0.1939140 -0.7542451 0.45072333
## 1984 -0.23055707  0.1410016 -1.6351374 0.10205793
## 1988 -0.30995572  0.1512620 -2.0491320 0.04048039
## 1992 -0.21210428  0.1482977 -1.4302602 0.15267977
## 1996 -0.02979256  0.1829521 -0.1628435 0.87064561
## 2000 -0.45006072  0.2959671 -1.5206443 0.12838698
##    factor(age_discrete)3 
##         Estimate Std. Error    t value     Pr(>|t|)
## 1972 -0.04740623  0.1347712 -0.3517535 7.250320e-01
## 1976 -0.05728971  0.1461493 -0.3919945 6.950723e-01
## 1980 -0.38373449  0.1947510 -1.9703849 4.882726e-02
## 1984 -0.66715561  0.1531480 -4.3562810 1.338718e-05
## 1988 -0.45235822  0.1629587 -2.7759065 5.517059e-03
## 1992 -0.50779145  0.1591265 -3.1911175 1.422483e-03
## 1996 -0.27181837  0.1907075 -1.4253153 1.541034e-01
## 2000 -0.71662580  0.2995257 -2.3925355 1.675438e-02
##    factor(age_discrete)4 
##        Estimate Std. Error    t value    Pr(>|t|)
## 1972  0.5125853  0.1772718  2.8915225 0.003843694
## 1976  0.4486697  0.1889933  2.3739982 0.017619118
## 1980  0.0238989  0.2292029  0.1042696 0.916957892
## 1984 -0.2458586  0.1823842 -1.3480260 0.177686577
## 1988 -0.4002949  0.1916484 -2.0886944 0.036765446
## 1992 -0.4130676  0.1714062 -2.4098761 0.015979426
## 1996 -0.1150134  0.2071408 -0.5552427 0.578743539
## 2000 -0.4803803  0.3324209 -1.4450965 0.148468300
##    educ1 
##        Estimate Std. Error  t value     Pr(>|t|)
## 1972 0.29708140 0.05826098 5.099149 3.486782e-07
## 1976 0.27725165 0.06059524 4.575469 4.820110e-06
## 1980 0.09550340 0.08261012 1.156074 2.476841e-01
## 1984 0.07262794 0.06501416 1.117110 2.639796e-01
## 1988 0.14341231 0.06473108 2.215509 2.675200e-02
## 1992 0.28034637 0.05916944 4.738026 2.193737e-06
## 1996 0.25114897 0.07093933 3.540335 4.017956e-04
## 2000 0.24461379 0.10801197 2.264692 2.355714e-02
##    female 
##          Estimate Std. Error    t value  Pr(>|t|)
## 1972 -0.005967246 0.10018983 -0.0595594 0.9525080
## 1976  0.133917095 0.10627385  1.2601133 0.2076637
## 1980  0.028503792 0.13923027  0.2047241 0.8377927
## 1984 -0.013415054 0.10477430 -0.1280376 0.8981223
## 1988 -0.079195190 0.11057733 -0.7161974 0.4738895
## 1992 -0.068672465 0.09994152 -0.6871265 0.4920221
## 1996 -0.059622697 0.11476049 -0.5195403 0.6033978
## 2000 -0.094042287 0.17291800 -0.5438548 0.5865559
##    income 
##        Estimate Std. Error  t value     Pr(>|t|)
## 1972 0.16082722 0.05077800 3.167262 1.544359e-03
## 1976 0.17180218 0.05780102 2.972303 2.964176e-03
## 1980 0.22816242 0.07197937 3.169831 1.530785e-03
## 1984 0.22486650 0.05569547 4.037429 5.452323e-05
## 1988 0.06352031 0.05891789 1.078116 2.810132e-01
## 1992 0.13290845 0.05188253 2.561719 1.043296e-02
## 1996 0.20801839 0.05996115 3.469220 5.246072e-04
## 2000 0.23581142 0.08847826 2.665190 7.709274e-03
## 
## Residual standard error: 1.815982 on 8351 degrees of freedom

To create the plot, I need to do some data wrangling. Below I extract the coefficients from the summary, which returns an array. I then use the adply() function from the {plyr} package to convert the array to a data frame. Then I add year, upper 50% CI limit, and lower 50% CI limit to the data frame. I also change the variable column to a factor so the order of the coefficients will be preserved in the plot.

sout <- summary(fm1)$coefficients

library(plyr)
sumd2 <- adply(sout, .margins = 3, .id = "Var")
sumd2$year <- seq(1972,2000,4)
sumd2$upper <- sumd2$Estimate + sumd2$`Std. Error`*0.67
sumd2$lower <- sumd2$Estimate - sumd2$`Std. Error`*0.67
sumd2$Var <- factor(sumd2$Var, labels = coef_names)

And once again I create the plot.

ggplot(sumd2) +
  aes(x = year, y = Estimate) +
  geom_point() +
  geom_errorbar(mapping = aes(ymin = lower, ymax = upper), width = 0) +
  geom_hline(yintercept = 0, linetype = 2) +
  facet_wrap(~ Var, scales = "free") +
  theme_classic()

The result is almost identical to the one created using {rstanarm}.

US Names: section 2.3 of Regression and Other Stories

In section 2.3 of Regression and Other Stories, Gelman et al. discuss using graphs to learn more about data. The third example involves the exploration of names in the United States from 1880 to 2010. In this post I recreate figures 2.6 – 2.9 using R code that makes more sense to me. The original data and code are in this GitHub repo.

First read in the data and drop the superfluous first column which just contains row numbers:

allnames <- read.csv("https://github.com/avehtari/ROS-Examples/raw/master/Names/data/allnames_clean.csv")
allnames$X <- NULL
dim(allnames)
## [1] 98012   133

Notice this data set is quite large and “wide” with over 98,000 rows and 133 columns. There is one row per name per sex with counts per year spanning from 1880 to 2010. Here’s a portion of the data:

allnames[1:3,1:6]
##   name sex X1880 X1881 X1882 X1883
## 1 Mary   F  7065  6919  8149  8012
## 2 Anna   F  2604  2698  3143  3306
## 3 Emma   F  2003  2034  2303  2367

Apparently 7,065 girls were named Mary in 1880, and then 6,919 in 1881, and so on.

Figs 2.6 and 2.7 show the distribution of the last letters of boys’ first names for the years 1906, 1956, and 2006. I wrote a function to create the plots for a given year.

last_letter <- function(year){
  xyear <- paste0("X", year)
  d <- subset(allnames, sex == "M", select = c("name", xyear))
  n <- nchar(d$name)
  d$last <- substr(d$name, n, n)
  d$last <- factor(d$last, levels = letters, labels = letters)
  p_data <- aggregate(d[[xyear]], by = list(d[["last"]]), sum)
  names(p_data) <- c("letter", "count")
  p_data$p <- p_data[["count"]]/sum(p_data[["count"]])
  barplot(p ~ letter, data = p_data, 
          main = paste("Last letter of boys' names in", year))
}

Now create the plots:

last_letter(1906)

last_letter(1956)

last_letter(2006)

The 2006 plot shows a trend in giving boys names that end with the letter “n”, such as Ethan, Jayden, Aiden, Mason, and Logan.

Fig 2.8 looks at the distribution of the last letters of boys’ names over time. To create this plot I found it helpful to use the {tidyr} and {dplyr} packages. First I reshaped the data to “long” format using pivot_longer() so there is one row per name per sex per year. Then I removed the “X” from the year value and once again extracted the last letter of each name using the nchar() and substr() functions.

library(tidyr)
allnames_long <- pivot_longer(allnames, cols = X1880:X2010, 
                              names_to = "year", values_to = "count")
allnames_long$year <- sub("X", "", allnames_long$year)
n <- nchar(allnames_long$name)
allnames_long$last <- substr(allnames_long$name, n, n)
allnames_long$last <- factor(allnames_long$last, 
                             levels = letters, labels = letters)

Next I used some {dplyr} functions to calculate the proportions of boys’ names ending in each of the 26 English letters by year. In the last step I create a new variable called “last2” to indicate whether the letter was an N, D or Y. This helps create a plot matching figure 2.8.

library(dplyr)
last_letter_males <- allnames_long |> 
  filter(sex == "M") |> 
  group_by(year, last) |> 
  summarise(letter_count = sum(count)) |> 
  mutate(p = letter_count/sum(letter_count),
         last2 = case_when(last == "n" ~ "n",
                           last == "d" ~ "d",
                           last == "y" ~ "y",
                           .default = "other"))

Before I create the plot, I cut the data into two sets: one with letters N, D and Y; and one with the rest of the letters.

ndy <- subset(last_letter_males, last2 != "other")
other <- subset(last_letter_males, last2 == "other")

And now to create the plot using {ggplot2}. Using two data sets allows me to tweak the lines for the letters N, D and Y. The “Set2” palette is a good all-purpose colorblind-friendly palette.

library(ggplot2)
ggplot() +
  aes(x = year, y = p, group = last) +
  geom_line(data = other, alpha = 1/5) +
  geom_line(mapping = aes(color = last2), data = ndy, linewidth = 1.5) +
  scale_x_discrete(breaks = c("1900", "1950", "2000")) +
  scale_color_brewer(palette = "Set2") +
  theme_minimal() +
  labs(color = "letter")

Names ending in D and Y had their moments in the sun, but now it’s all about names ending in N.

Figure 2.9 displays trends in the concentration of names in the top 10. This was the easiest plot to create. Again I used some {dplyr} functions to create the data. After I calculate proportions of counts by year and sex, I sort the data in descending order, slice off the top 10 names, and then sum the proportions by year and sex.

names_conc <- allnames_long |>
  group_by(year, sex) |> 
  mutate(p = count/sum(count)) |> 
  arrange(desc(p)) |> 
  slice_head(n = 10) |> 
  summarize(total_p = sum(p))

And here’s the plot:

ggplot(names_conc) + 
  aes(x = year, y = total_p, group = sex, color = sex) +
  geom_line() +
  scale_x_discrete(breaks = c("1900", "1950", "2000")) +
  scale_color_brewer(palette = "Set2") +
  ylim(c(0, 0.45)) +
  theme_minimal()

Pre-1900, about 40% of all boys’ names were in the top 10. Fast forward 100 years to 2000, and only about 10% of names, boy or girl, are in the top 10. We see much more variability in names in the 21st century.

Hypothesis testing: section 4.6 of Regression and Other Stories

Gelman, et al. tell a story from long ago where someone sent them a fax (that’s right, a fax) asking for help with suspected voter fraud. The story is in section 4.6 (page 63) and is included to provide an example of constructing a hypothesis test. They provide data and code for this example in the Coop folder on Github. The point of this post is to document some changes I made to the code to help me understand it.

The story involves the election of a board of directors for a “residential organization”. 5553 people were allowed to vote for up to 6 people. 27 candidates were running for the board. Votes were tallied after 600 people voted, then again at 1200, 2444, 3444, 4444, and the end after all 5553 people voted. What aroused suspicion was the fact that the proportion of votes for the candidates remained steady each time the votes were tallied. According to the author of the fax: “the election was rigged…[it] is a fixed vote with fixed percentages being assigned to each and every candidate making it impossible to participate in an honest election.”

Let’s read in the data and demonstrate what they’re talking about. Notice this data is the rare CSV without column headers. The data consists of 27 rows, one for each candidate, showing cumulative vote totals.

data <- read.csv("https://raw.githubusercontent.com/avehtari/ROS-Examples/master/Coop/data/Riverbay.csv", 
                 header = FALSE)
# drop the 1st and 8th columns, which contain candidate names we don't need
votes <- data[,2:7]
head(votes)
##    V2  V3  V4   V5   V6   V7
## 1 208 416 867 1259 1610 2020
## 2  55 106 215  313  401  505
## 3 133 250 505  716  902 1129
## 4 101 202 406  589  787  976
## 5 108 249 512  745  970 1192
## 6  54  94 196  279  360  451

Now let’s calculate the proportion of votes received at each interval and create a basic line plot. Each line below represents proportion of votes received for a candidate at each of the six intervals. Notice how the lines are mostly flat. This is what prompted the emergency fax.

vote_p <- apply(votes, 2, proportions)
matplot(t(vote_p), type = "l", col = 1, lty = 1)

Gelman, et al. demonstrate this using separate plots for the top 8 vote-getters (Fig 4.5). They also divide by number of voters instead of total votes received. (Remember, each voter gets to vote for up to six people.) This simply changes the denominator, and hence, the y-axis. The steady vote patterns remain.

voters <- c(600,1200,2444,3444,4444,5553)
vote_p <- sweep(votes, 2, voters, FUN = "/")
matplot(t(vote_p), type = "l", col = 1, lty = 1)

They note that the data in this plot is not independent since proportions at times 2 and beyond include votes that came before. To address this, they create a matrix that contains number of votes received at each interval instead of cumulative totals.

interval_votes <- t(apply(votes, 1, diff))
interval_votes <- cbind(votes[,1], interval_votes)
head(interval_votes)
##           V3  V4  V5  V6  V7
## [1,] 208 208 451 392 351 410
## [2,]  55  51 109  98  88 104
## [3,] 133 117 255 211 186 227
## [4,] 101 101 204 183 198 189
## [5,] 108 141 263 233 225 222
## [6,]  54  40 102  83  81  91

After taking differences the lines still seem mostly stable.

interval_p <- apply(interval_votes, 2, proportions)
matplot(t(interval_p), type = "l", col = 1, lty = 1)

Again, the authors divide by number of voters instead of total votes to create these plots, but the result is the same with a different y-axis. Here’s how I would do the calculations and create the plot.

interval_voters <- c(600, diff(voters))
interval_p <- sweep(interval_votes, 2, interval_voters, FUN = "/")
matplot(t(interval_p), type = "l", col = 1, lty = 1)

And now comes the hypothesis test. What is the probability of seeing steady proportions like this if the votes really were coming in at random? I’ll quote the book here: “Because the concern was that the votes were unexpectedly stable as the count proceeded, we define a test statistic to summarize variability.” The test statistic in this case is the standard deviations of the sample proportions. We can quickly get these from the interval_p object we created above.

test_stat <- apply(interval_p, 1, sd)

Now we need to calculate the theoretical test statistic. For this we assume each candidate has a fixed but unknown proportion of voters who will vote for them, \(\pi_i\). Under the null, the six intervals where votes are tallied are random samples of the voters. So at each time point we can think of the proportion as a draw from a distribution with mean \(\pi_i\) and standard deviation \(\sqrt{\pi_i(1 - \pi_i)/n_t}\), where \(n_t\) is the number of voters at each interval. To calculate this, we first need to estimate \(\pi_i\) with \(p_i\), the observed proportion of votes each candidate received. This is the last column of the votes data frame divided by the total number of voters, 5553.

p_hat <- votes[,6]/5553

Then we take the average of the variances calculated at each time point and take the square root to get the theoretical test statistic.

theory_test_stat <- sapply(p_hat, function(x)sqrt(mean(x*(1-x)/interval_voters)))

Under the null, the observed test statistics should be very close to the theoretical test statistics. This is assessed in Fig 4.7 in the book. I replicate the plot as follows:

plot(x = votes[,6], y = test_stat, xlab = "total # of votes for the candidate",
     ylab = "sd of separate vote proportions")
points(x = votes[,6], y = theory_test_stat, pch = 19)

The authors note that “the actual standard deviations appear consistent with the theoretical model.”

Personally I think the plot would be a bit more effective if they zoomed out. Some of the dramatic-looking departures are only off by about 0.01. For example:

plot(x = votes[,6], y = test_stat, xlab = "total # of votes for the candidate",
     ylab = "sd of separate vote proportions", ylim = c(0,0.05))
points(x = votes[,6], y = theory_test_stat, pch = 19)

Another null hypothesis approach is the chi-square test of association. Under the null, the number of votes is not associated with the interval when votes were tallied. We can run this test for each candidate and look at the p-values. If there is no association for each candidate we should see a fairly uniform scatter of p-values. On the other hand, if there was “suspiciously little variation over time” we would see a surplus of high p-values. Here’s how I carried out these calculations. I first created the 2-way tables of yes/no versus time for each candidate. I then applied the chi-square test to each table, and to that result, I extracted each p-value. A uniform QQ plot shows the p-values are mostly uniformly distributed.

tables <- apply(interval_votes, 1, function(x) rbind(x, interval_voters - x), 
      simplify = FALSE)
chisq_out <- lapply(tables, chisq.test, correct = FALSE)
p_values <- sapply(chisq_out, function(x)x$p.value)
qqplot(ppoints(27), p_values)
qqline(p_values, distribution = qunif)

Finally the authors mention that a single test on the entire 27 x 6 table could be performed. This seems like the easiest approach of all.

chisq.test(interval_votes, correct = FALSE)
## 
##  Pearson's Chi-squared test
## 
## data:  interval_votes
## X-squared = 114.72, df = 130, p-value = 0.8279

My R code differs quite a bit from the R code provided by the authors. I’m not saying mine is better, it just makes more sense to me. Maybe someone else will find this approach useful.

Parametric Bootstrap of Kolmogorov–Smirnov Test

Zeimbekakis, et al. recently published an article in The American Statistician titled On Misuses of the Kolmogorov–Smirnov Test for One-Sample Goodness-of-Fit. One of the misuses they discuss is using the KS test with parameters estimated from the sample. For example, let’s sample some data from a normal distribution.

x <- rnorm(200, mean = 8, sd = 8)
c(xbar = mean(x), s = sd(x))
##     xbar        s 
## 8.333385 7.979586

If we wanted to assess the goodness-of-fit of this sample to a normal distribution, the following is a bad way to use the KS test:

ks.test(x, "pnorm", mean(x), sd(x))
## 
##  Asymptotic one-sample Kolmogorov-Smirnov test
## 
## data:  x
## D = 0.040561, p-value = 0.8972
## alternative hypothesis: two-sided

The appropriate way to use the KS test is to actually supply hypothesized parameters. For example:

ks.test(x, "pnorm", 8, 8)
## 
##  Asymptotic one-sample Kolmogorov-Smirnov test
## 
## data:  x
## D = 0.034639, p-value = 0.9701
## alternative hypothesis: two-sided

The conclusion of both tests is the same: we fail to reject the null hypothesis that the sample is from a normal distribution with the stated mean and standard deviation. However, the first test, the one that plugs in estimated parameters, is very conservative. Zeimbekakis, et al. show this via simulation; I show a simplified version of their simulation below. The basic idea is that if the test were valid, the p-values would be uniformly distributed and the points in a uniform QQ-plot would fall along the diagonal line. Clearly that’s not the case.

n <- 200
rout <- replicate(n = 1000, expr = {
  x <- rnorm(n, 8 , 8)
  xbar <- mean(x)
  s <- sd(x)
  ks.test(x, "pnorm", xbar, s)$p.value
})
hist(rout, main = "Histogram of p-values")

qqplot(x = ppoints(n), y = rout, main = "Uniform QQ-plot")
qqline(rout, distribution = qunif)

Conclusion: using fitted parameters in place of the true parameters in the KS test yields conservative results. The authors state in the abstract that this “has been ‘discovered’ multiple times.”

When done the right way, the KS test yields uniformly distributed p-values.

rout2 <- replicate(n = 1000, expr = {
  x <- rnorm(n, 8 , 8)
  ks.test(x, "pnorm", 8, 8)$p.value
})
hist(rout2)

qqplot(x = ppoints(n), y = rout2, main = "Uniform QQ-plot")
qqline(rout2, distribution = qunif)

Obviously it’s difficult to know which parameters to supply to the KS test. Above we knew to supply 8 as the mean and standard deviation because that’s what we used to generate the data. But what to do in real life? Zeimbekakis, et al. propose a parametric bootstrap to approximate the null distribution of the KS test statistic. The steps to implement the bootstrap are as follows:

  1. draw a random sample from the fitted distribution
  2. get estimates of parameters of random sample
  3. obtain the empirical distribution function
  4. calculate the bootstrapped KS statistic
  5. repeat steps 1 – 4 many times

Let’s do it. The following code is a simplified version of what the authors provide with the paper. Notice they use MASS::fitdistr() to obtain MLE parameter estimates. This returns the same mean for the normal distribution but a slightly smaller (i.e. biased) estimated standard deviation.
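
As a quick check (using the x simulated earlier), the MLE divides by n rather than n - 1 when estimating the standard deviation, so it comes out a bit smaller than sd(x):

# MLE of sigma vs. the usual sample standard deviation
c(mle = unname(MASS::fitdistr(x, "normal")$estimate["sd"]),
  sd = sd(x),
  rescaled = sd(x) * sqrt((length(x) - 1)/length(x)))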

param  <- MASS::fitdistr(x, "normal")$estimate
ks <- ks.test(x, function(x)pnorm(x, param[1], param[2]))
stat <- ks$statistic
B <- 1000
stat.b <- double(B)
n <- length(x)

## bootstrapping
for (i in 1:B) {
  # (1) draw a random sample from a fitted dist
  x.b <- rnorm(n, param[1], param[2])
  # (2) get estimates of parameters of random sample
  fitted.b <- MASS::fitdistr(x.b, "normal")$estimate
  # (3) get empirical distribution function
  Fn <- function(x)pnorm(x, fitted.b[1], fitted.b[2])
  # (4) calculate bootstrap KS statistic
  stat.b[i] <- ks.test(x.b, Fn)$statistic
}
mean(stat.b >= stat)
## [1] 0.61

The p-value is the proportion of statistics greater than or equal to the observed statistic calculated with estimated parameters.

Let’s turn this into a function and show that it returns uniformly distributed p-values when used with multiple samples. Again this is a simplified version of the R code the authors generously shared with their paper.

ks.boot <- function(x, B = 1000){
  param  <- MASS::fitdistr(x, "normal")$estimate
  ks <- ks.test(x, function(k)pnorm(k, param[1], param[2]))
  stat <- ks$statistic
  stat.b <- double(B)
  n <- length(x)
  for (i in 1:B) {
    x.b <- rnorm(n, param[1], param[2])
    fitted.b <- MASS::fitdistr(x.b, "normal")$estimate
    Fn <- function(x)pnorm(x, fitted.b[1], fitted.b[2])
    stat.b[i] <- ks.test(x.b, Fn)$statistic
  }
  mean(stat.b >= stat)
}

Now replicate the function with many samples. This takes a moment to run. It took my Windows 11 PC with an Intel i7 chip about 100 seconds to run.

rout_boot <- replicate(n = 1000, expr = {
  x <- rnorm(n, 8 , 8)
  ks.boot(x)
})
hist(rout_boot)

qqplot(x = ppoints(n), y = rout_boot, main = "Uniform QQ-plot")
qqline(rout_boot, distribution = qunif)

Age Adjustment: section 2.4 of Regression and Other Stories

In section 2.4 of Regression and Other Stories, Gelman, et al. explain the necessity of age adjustment when investigating mortality rates. The book is freely available as a PDF online and the section of interest is on pages 31-33. Upon first reading, I had trouble understanding what they were doing. In particular I didn’t follow Figure 2.12. I’m aware that says more about me than the authors. Fortunately, there’s a footnote in the book that says all data and code are available in the AgePeriodCohort folder on GitHub. Good, I thought, I’ll look at the code and figure out what’s going on. Famous last words.

The R script that performs the age adjustment is births.R. It clocks in at over 400 lines and has practically no comments outside of the occasional “Sum it up.” As you run the code, you’ll find the script generates several plots not in the book. In addition, the plots that are in the book are generated in a different order. Trying to parse the R code to help me understand the exposition was frustrating. But I persisted.

Reading the bibliographic note at the end of the chapter indicated the age adjustment example was first discussed on Gelman’s blog. In the blog post he walks through the process of age adjustment, creating the same plots in the book, and provides the R code. This is basically the births.R script. He says at the end, “the code is ugly. Don’t model your code after my practices! If any of you want to make a statistics lesson out of this episode, I recommend you clean the code.”

This blog post is my statistics lesson trying to understand and clean this code.

Fig 2.11 (a)

The data apparently come from the CDC, but I’m using the data file Gelman provides with his R code. The data shows the number of deaths per age per gender per year for white non-Hispanics in the US. For example, the first row shows 1291 female deaths (Male = 0) in 1999 for those who were 35 years old. The total population of 35 year old women in 1999 was 1,578,829. The rate is 1291/1,578,829 x 100,000 = 81.8, or about 82 deaths per 100,000.

data <- read.table("white_nonhisp_death_rates_from_1999_to_2013_by_sex.txt", 
                   header=TRUE)
head(data)
##   Age Male Year Deaths Population Rate
## 1  35    0 1999   1291    1578829 81.8
## 2  35    0 2000   1264    1528463 82.7
## 3  35    0 2001   1186    1377466 86.1
## 4  35    0 2002   1194    1333639 89.5
## 5  35    0 2003   1166    1302188 89.5
## 6  35    0 2004   1166    1325435 88.0
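
As a quick sanity check of that arithmetic, we can recompute the rate for the first row:

# deaths per 100,000 for 35 year old women in 1999
with(data[1, ], Deaths / Population * 1e5)
# about 81.8, matching the Rate column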

The first plot is mortality rate of the 45-54 age group from 1999 – 2013. We first sum both Deaths and Population by year and then calculate the Mortality Rate by dividing Deaths by Population. This is a nice opportunity to use the base R pipe operator, |>.

aggregate(cbind(Deaths, Population) ~ Year, data = data, FUN = sum, 
          subset = Age %in% 45:54) |> 
  transform(Rate = Deaths/Population) |> 
  plot(Rate ~ Year, data = _, type = "l", ylab = "Death Rate")

This is the third plot in Gelman’s blog post titled “So take the ratio!”

Fig 2.11 (b)

The second plot shows the average age of the 45-54 age group increasing as the baby boomers move through. First we sum the Population by Age and Year for the 45-54 group. Then we take that data and compute the mean age per year, weighted by the population.

aggregate(Population ~ Age + Year, data = data, sum, 
          subset = Age %in% 45:54) |> 
  aggregate(Population ~ Year, data = _, 
            function(x)weighted.mean(45:54, x)) |> 
  plot(Population ~ Year, data = _, type = "l")

To help make this clear, let’s find the mean age of the 45-54 group in 1999. First find the population for each age in 1999:

tmp <- aggregate(Population ~ Age + Year, data = data, sum, 
          subset = Age %in% 45:54 & Year == 1999)
tmp$Population
##  [1] 3166393 3007083 2986252 2805975 2859406 2868751 2804957 3093631 2148382
## [10] 2254975

To find the mean age of the 45-54 group in 1999, we need to weight each age with the population. We can do that with the weighted.mean() function.

weighted.mean(45:54, tmp$Population)
## [1] 49.25585

The code above does this for 1999-2013. I think it’s worth noting that while the plot looks dramatic, the average age only increases from about 49.2 to 49.7. But I suppose when you’re dealing with millions of people that increase makes a difference.

This is the fourth plot in Gelman’s blog post titled “But the average age in this group is going up!”

Fig 2.11 (c)

This is where I began to struggle when reading the book.

This figure is titled “The trend in raw death rates since 2005 can be explained by age-aggregation bias”. This is the eighth plot in the blog post where it has a bit more motivation. Let’s recreate the plots in the blog post leading up to this plot.

The first plot to recreate is the sixth plot in the blog post. It’s basically the previous plot rescaled as a rate. It’s created by first calculating the death rate in 1999, and then taking the weighted mean of that rate using the total population for each age group.

dr1999 <- aggregate(cbind(Deaths, Population) ~ Age, data = data, FUN = sum, 
          subset = Age %in% 45:54 & Year == 1999) |> 
  transform(Rate = Deaths/Population) 

# Now create plot
aggregate(Population ~ Age + Year, data = data, sum, 
          subset = Age %in% 45:54) |> 
  aggregate(Population ~ Year, data = _, 
            function(x)weighted.mean(dr1999$Rate, x)) |> 
  plot(Population ~ Year, data = _, type = "l", ylab = "Reconstructed death rate")

Next he combines this plot with the plot of the raw death rate (Fig 2.11 (a)). This is the seventh plot in the blog post.

years <- 1999:2013

Raw <- aggregate(cbind(Deaths, Population) ~ Year, data = data, FUN = sum, 
          subset = Age %in% 45:54) |> 
  transform(Rate = Deaths/Population)

Expected <- aggregate(Population ~ Age + Year, data = data, sum, 
          subset = Age %in% 45:54) |> 
  aggregate(Population ~ Year, data = _, 
            function(x)weighted.mean(dr1999$Rate, x))

plot(years, Raw$Rate, type="l", ylab="Death rate for 45-54 non-Hisp whites")
lines(years, Expected$Population, col="green4")
text(2002.5, .00404, "Raw death rate", cex=.8)
text(2009, .00394, "Expected just from\nage shift", col="green4", cex=.8)

Then finally he says, “We can sharpen this comparison by anchoring the expected-trend-in-death-rate-just-from-changing-age-composition graph at 2013, the end of the time series, instead of 1999.” This means we need to calculate the death rate in 2013, and then take the weighted mean of that rate by using the total population for each age group. This is the dr2013 data frame. Then we create the same plot as above except now using the death rate in 2013.

dr2013 <- aggregate(cbind(Deaths, Population) ~ Age, data = data, FUN = sum, 
                    subset = Age %in% 45:54 & Year == 2013) |> 
  transform(Rate = Deaths/Population) 

Raw <- aggregate(cbind(Deaths, Population) ~ Year, data = data, FUN = sum, 
                 subset = Age %in% 45:54) |> 
  transform(Rate = Deaths/Population)

Expected <- aggregate(Population ~ Age + Year, data = data, sum, 
                      subset = Age %in% 45:54) |> 
  aggregate(Population ~ Year, data = _, 
            function(x)weighted.mean(dr2013$Rate, x))

plot(years, Raw$Rate,  type="l", 
     ylab="Death rate for 45-54 non-Hisp whites")
lines(years, Expected$Population, col="green4")
text(2002.5, 0.00395, "Raw death rate", cex=.8)
text(2002, .00409, "Expected just from\nage shift", col="green4", cex=.8)

Gelman notes, “since 2003, all the changes in raw death rate in this group can be explained by changes in age composition.”

Fig 2.12 (a)

This is the first plot showing age-adjusted death rates. Gelman explains this as follows in his blog post: “for each year in time, we take the death rates by year of age and average them, thus computing the death rate that would’ve been observed had the population distribution of 45-54-year-olds been completely flat each year.” The book calls it “the simplest such adjustment, normalizing each year to a hypothetical uniformly distributed population in which the number of people is equal at each age from 45 through 54.” I found this latter explanation a little confusing.

To create this plot we first sum Deaths and Populations by age and year for the 45-54 age group, then calculate the death rate, and then simply take the mean rate by year. That’s it. Gelman takes the additional step of rescaling the rate so that the rate is 1 in 1999.

aggregate(cbind(Deaths, Population) ~ Age + Year, data = data, sum, 
          subset = Age %in% 45:54) |> 
  transform(Rate = Deaths/Population) |> 
  aggregate(Rate ~ Year, data = _, mean) |> 
  transform(AA_Rate = Rate/Rate[1]) |>   # relative to 1999
  plot(AA_Rate ~ Year, data = _, type = "l", 
       ylab = "age-adjusted death rate, relative to 1999")

Fig 2.12 (b)

In the book, this plot shows two different age adjustments, even though the exposition says there are three. They’re probably referring to the original blog post plot, which does show three. I recreate the plot in the blog post, which is the second to last plot.

This plot shows (1) age-adjustment using the simple mean of rates, (i.e., the plot above), (2) age-adjustment using the distribution of ages in 1999, and (3) age-adjustment using the distribution of ages in 2013.

This plot requires the most work of all. First we need to get the total population for all ages in 1999 and 2013. These are used to make the age adjustments.

pop1999 <- aggregate(Population ~ Age, data = data, 
                     subset = Year == 1999 & Age %in% 45:54, sum)[["Population"]]
pop2013 <- aggregate(Population ~ Age, data = data, 
                     subset = Year == 2013 & Age %in% 45:54, sum)[["Population"]]

Next we calculate age-adjusted rates using the population distributions from 1999 and 2013. Again we sum Deaths and Populations by age and year for the 45-54 age group and calculate the death rate. Then we calculate the average rate by year using the population distributions to calculate a weighted mean.

# age-adjustment from Fig 2.12 (a)
aa_rate_uniform <- aggregate(cbind(Deaths, Population) ~ Age + Year, 
                             data = data, sum, 
                             subset = Age %in% 45:54) |> 
  transform(Rate = Deaths/Population) |> 
  aggregate(Rate ~ Year, data = _, mean) |> 
  transform(AA_Rate = Rate/Rate[1])

aa_rate_1999 <- aggregate(cbind(Deaths, Population) ~ Age + Year, 
                          data = data, sum,
                          subset = Age %in% 45:54) |> 
  transform(Rate = Deaths/Population) |> 
  aggregate(Rate ~ Year, data = _, function(x)weighted.mean(x, pop1999))

aa_rate_2013 <- aggregate(cbind(Deaths, Population) ~ Age + Year, 
                          data = data, sum, 
                          subset = Age %in% 45:54) |> 
  transform(Rate = Deaths/Population) |> 
  aggregate(Rate ~ Year, data = _, function(x)weighted.mean(x, pop2013))

Now we can make the plot. Notice we find the range of all the data to help set the limits of the y-axis. Also notice we rescale the plot so all lines begin at 1.

rng <- range(aa_rate_uniform$Rate/aa_rate_uniform$Rate[1], 
             aa_rate_1999$Rate/aa_rate_1999$Rate[1], 
             aa_rate_2013$Rate/aa_rate_2013$Rate[1])
plot(years, aa_rate_uniform$Rate/aa_rate_uniform$Rate[1], type = "l", ylim=rng,
     ylab = "age-adjusted death rate, relative to 1999")
lines(years, aa_rate_1999$Rate/aa_rate_1999$Rate[1], lty=2)
lines(years, aa_rate_2013$Rate/aa_rate_2013$Rate[1], lty=3)
text(2003, 1.053, "Using 1999\nage dist", cex=.8)
text(2004, 1.032, "Using 2013\nage dist", cex=.8)

The point of this plot is to demonstrate it doesn’t matter how the age-adjustment is done.

Fig 2.12 (c)

The final plot shows age adjusted death rates broken down by sex. This is basically the same code as Fig 2.12 (a) but with Male included in the calls to aggregate(). To rescale the y-axis so it starts at 1 we need to divide each vector of rates by the respective 1999 value.

aa_rate_sex <- aggregate(cbind(Deaths, Population) ~ Age + Year + Male, 
                         data = data, sum, 
          subset = Age %in% 45:54) |> 
  transform(Rate = Deaths/Population) |> 
  aggregate(Rate ~ Year + Male, data = _, mean) 

plot(years, aa_rate_sex$Rate[aa_rate_sex$Male == 0]/
       aa_rate_sex$Rate[aa_rate_sex$Year == 1999 & aa_rate_sex$Male == 0], 
     col="red", type = "l",
     ylab = "Death rate relative to 1999")
lines(years, aa_rate_sex$Rate[aa_rate_sex$Male == 1]/
        aa_rate_sex$Rate[aa_rate_sex$Year == 1999 & aa_rate_sex$Male == 1], 
      col="blue", type = "l")
text(2011.5, 1.075, "Women", col="red")
text(2010.5, 1.02, "Men", col="blue")

Gelman called his code “ugly”, but it’s his code and he understands it. I don’t claim my code is any better, but it’s my code and I understand it.

A note on car::deltaMethod

As we would typically estimate the success probability p with the observed success probability \(\hat{p} = \sum_i X_i/n\), we might consider using \(\frac{\hat{p}}{1 - \hat{p}}\) as an estimate of \(\frac{p}{1 - p}\) (the odds). But what are the properties of this estimator? How might we estimate the variance of \(\frac{\hat{p}}{1 - \hat{p}}\)? Moreover, how can we approximate its sampling distribution? Intuition abandons us, and exact calculation is relatively hopeless, so we have to rely on an approximation. The Delta Method will allow us to obtain reasonable, approximate answers to our questions. (Casella and Berger, p. 240)

Most statistics books that teach the Delta Method work a few examples where they manually derive the standard error of a nonlinear function of some statistic. This requires some calculus and algebra. The result is a closed-form formula we could ostensibly use in a function to estimate the standard error of a statistic, such as estimated odds, which is a function of an estimated proportion. I want to document how we can use the deltaMethod() function in the {car} package to do this work for us.

Casella and Berger show that the estimated variance of the odds estimator is \(\frac{\hat{p}}{n(1 - \hat{p})^3}\) (p. 242), so its estimated standard error is the square root of that quantity. If we didn’t know this off hand or have a function available to us, we can use the deltaMethod() function to derive this estimator on-the-fly as we analyze data. For example, let’s say we observe 19 successes out of 30 trials, an estimated probability of about 0.63, but we want to express that as odds and obtain a confidence interval on the estimated odds.
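
For reference, here is the delta method calculation behind that formula: with \(g(p) = p/(1 - p)\) we have \(g'(p) = 1/(1 - p)^2\), so \(\text{Var}\left(\frac{\hat{p}}{1 - \hat{p}}\right) \approx [g'(\hat{p})]^2 \widehat{\text{Var}}(\hat{p}) = \frac{1}{(1 - \hat{p})^4} \cdot \frac{\hat{p}(1 - \hat{p})}{n} = \frac{\hat{p}}{n(1 - \hat{p})^3}\).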

To begin we load the {car} package. Next we need to store our probability estimate in a named vector. I gave it the name “p”. After that, we need to estimate the variance of the probability estimate, which in this case is the familiar \(\hat{p}(1 - \hat{p})/n\). Finally we use the deltaMethod() function. The first argument is our named vector containing the estimated probability. The second argument is the function of our estimate expressed as a character string. Notice this is the odds. The third argument is the estimated variance of our original estimate.

library(car)
p_hat <- c("p" = 19/30)
var_p <- p_hat*(1 - p_hat)/30
deltaMethod(p_hat, g. = "p/(1-p)", vcov. = var_p)
##           Estimate      SE   2.5 % 97.5 %
## p/(1 - p)  1.72727 0.65441 0.44466 3.0099

So our estimated odds is about 1.73 with a 95% confidence interval of [0.44, 3.01]. The reported standard error agrees with the calculation using the formula provided in Casella and Berger.

sqrt(p_hat/(30*(1 - p_hat)^3))
##         p 
## 0.6544077

In Foundations of Statistics for Data Scientists, Agresti and Kateri use the Delta Method to derive the variance of square root transformed Poisson counts. They show that the square root of a Poisson random variable with a “large mean” has an approximate standard error of 1/2. Again we can use the deltaMethod() function with data to derive this on-the-fly.
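
The delta method calculation behind that result is short: with \(g(\mu) = \sqrt{\mu}\) we have \(g'(\mu) = 1/(2\sqrt{\mu})\), so \(\text{Var}(\sqrt{Y}) \approx [g'(\mu)]^2 \text{Var}(Y) = \frac{1}{4\mu} \cdot \mu = \frac{1}{4}\), giving a standard error of about \(1/2\) regardless of \(\mu\).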

Below we simulate 10,000 observations from a Poisson distribution with mean 25. Then we estimate the mean and assign it to a named vector. Finally we use the deltaMethod() function to show the result is indeed about 1/2. Notice we simply have to provide the transformation as a character string in the second argument.

set.seed(123)
y <- rpois(10000, 25)
m <- c("m" = mean(y))
deltaMethod(m, g. = "sqrt(m)", vcov. = var(y))
##         Estimate      SE   2.5 % 97.5 %
## sqrt(m)  4.99967 0.49747 4.02465 5.9747

Of course the deltaMethod() function was really designed to take fitted model objects and estimate the standard error of functions of coefficients. See its help page for a few examples. But I wanted to show it could also be used for more pedestrian textbook examples.
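
To give a flavor of that model-based usage, here is a minimal sketch (my own example using the built-in cars data, not from the post): estimating the reciprocal of the slope from a simple linear model along with its delta-method standard error.

fit <- lm(dist ~ speed, data = cars)
deltaMethod(fit, g. = "1/speed")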

References

  • Agresti, A. and Kateri, M. (2022) Foundations of Statistics for Data Scientists. CRC Press.
  • Casella, G. and Berger, R.L. (2002) Statistical Inference. 2nd Edition, Duxbury Press, Pacific Grove.
  • Fox J, Weisberg S (2019). An R Companion to Applied Regression, Third edition. Sage, Thousand Oaks CA. https://socialsciences.mcmaster.ca/jfox/Books/Companion/.
  • R Core Team (2024). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.

Encoding a contrast in a linear model

This is a note to myself on how to encode a contrast in a linear model.

The following data come from the text Design and Analysis of Experiments by Dean and Voss (1999). It involves the lifetime per unit cost of nonrechargeable batteries. Four types of batteries are considered:

  1. alkaline, name brand
  2. alkaline, store brand
  3. heavy duty, name brand
  4. heavy duty, store brand

We can read the data in from the textbook web site.

URL <- "https://corescholar.libraries.wright.edu/cgi/viewcontent.cgi?filename=12&article=1007&context=design_analysis&type=additional"
bat <- read.table(URL, header=TRUE)
names(bat) <- tolower(names(bat))
bat$type <- factor(bat$type)
head(bat)
##   type lpuc order
## 1    1  611     1
## 2    2  923     2
## 3    1  537     3
## 4    4  476     4
## 5    1  542     5
## 6    1  593     6

A quick look at the means suggests the alkaline store brand battery seems like the best battery for the money.

means <- tapply(bat$lpuc, bat$type, mean)
means
##      1      2      3      4 
## 570.75 860.50 433.00 496.25

We might want to make comparisons between these means. Three such comparisons are as follows:

  1. compare battery duty (alkaline vs heavy duty)
  2. compare battery brand (name brand versus store brand)
  3. compare the interaction (levels 1/4 vs levels 2/3)

These are the comparisons presented in the text (p. 171). We can make these comparisons “by hand”.

# compare battery duty
mean(means[1:2]) - mean(means[3:4])
## [1] 251
# compare battery brand
mean(means[c(1,3)]) - mean(means[c(2,4)])
## [1] -176.5
# compare interaction
mean(means[c(1,4)]) - mean(means[c(2,3)])
## [1] -113.25

We can also make these comparisons using a contrast matrix. Below we create the contrast as a matrix object and name it K. Then we use matrix multiplication to calculate the same comparisons above.

K <- matrix(c(1/2, 1/2, -1/2, -1/2,
              1/2, -1/2, 1/2, -1/2,
              1/2, -1/2, -1/2, 1/2), 
            byrow = T, ncol = 4)
K %*% means
##         [,1]
## [1,]  251.00
## [2,] -176.50
## [3,] -113.25

This particular contrast matrix is orthogonal. “Two contrasts are orthogonal if the sum of the products of corresponding coefficients (i.e. coefficients for the same means) adds to zero.” (source) We can show this using the crossprod() function.

crossprod(K[1,],K[2,])
##      [,1]
## [1,]    0
crossprod(K[2,],K[3,])
##      [,1]
## [1,]    0
crossprod(K[1,],K[3,])
##      [,1]
## [1,]    0

Obviously we would like to calculate standard errors and confidence intervals for these comparisons. One way is to fit a model using lm() and do the comparisons as a follow-up using a package such as {multcomp}. To do this we use the glht() function with our contrast, K.

library(multcomp)
m <- lm(lpuc ~ type, data = bat)
comp <- glht(m, linfct = mcp(type = K))
confint(comp)
## 
##   Simultaneous Confidence Intervals
## 
## Multiple Comparisons of Means: User-defined Contrasts
## 
## 
## Fit: lm(formula = lpuc ~ type, data = bat)
## 
## Quantile = 2.7484
## 95% family-wise confidence level
##  
## 
## Linear Hypotheses:
##        Estimate  lwr       upr      
## 1 == 0  251.0000  184.1334  317.8666
## 2 == 0 -176.5000 -243.3666 -109.6334
## 3 == 0 -113.2500 -180.1166  -46.3834

Another way is to encode the contrasts directly in our model. We can do that using the capital C() function. The only difference here is that we need to transpose the matrix. The comparisons need to be defined on the columns instead of the rows. We could use the t() function to transpose K, but we’ll go ahead and create a new matrix to make this clear.

K2 <- matrix(c(1/2, 1/2, -1/2, -1/2,
              1/2, -1/2, 1/2, -1/2,
              1/2, -1/2, -1/2, 1/2), 
            ncol = 3)

Now we refit the model using the contrast in the model formula. Notice the coefficients are the same comparisons we calculated above. The intercept is the grand mean of lpuc, (i.e. mean(bat$lpuc)).

m2 <- lm(lpuc ~ C(type, K2), data = bat)
summary(m2)
## 
## Call:
## lm(formula = lpuc ~ C(type, K2), data = bat)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -66.50 -33.56 -18.12  38.19  72.75 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    590.12      12.16  48.511 3.85e-15 ***
## C(type, K2)1   251.00      24.33  10.317 2.55e-07 ***
## C(type, K2)2  -176.50      24.33  -7.255 1.01e-05 ***
## C(type, K2)3  -113.25      24.33  -4.655 0.000556 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 48.66 on 12 degrees of freedom
## Multiple R-squared:  0.9377, Adjusted R-squared:  0.9222 
## F-statistic: 60.24 on 3 and 12 DF,  p-value: 1.662e-07

And now we can use confint() to get the confidence intervals.

confint(m2)
##                  2.5 %     97.5 %
## (Intercept)   563.6202  616.62977
## C(type, K2)1  197.9905  304.00954
## C(type, K2)2 -229.5095 -123.49046
## C(type, K2)3 -166.2595  -60.24046

These are narrower than what we got with {multcomp}. That’s because {multcomp} uses a family-wise confidence interval that adjusts for making multiple comparisons.
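
One way to see the difference: confint(m2) uses the ordinary t quantile with 12 residual degrees of freedom, which is smaller than the simultaneous quantile of 2.7484 reported by glht() above.

# per-comparison t quantile used by confint(m2); about 2.18 vs. roughly 2.75 family-wise
qt(0.975, df = 12)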

Another property of orthogonal contrasts is that their estimates are uncorrelated. We can see this by calling vcov() on the fitted model object that directly uses the contrast matrix we created.

zapsmall(vcov(m2))
##              (Intercept) C(type, K2)1 C(type, K2)2 C(type, K2)3
## (Intercept)     147.9818       0.0000       0.0000       0.0000
## C(type, K2)1      0.0000     591.9271       0.0000       0.0000
## C(type, K2)2      0.0000       0.0000     591.9271       0.0000
## C(type, K2)3      0.0000       0.0000       0.0000     591.9271

We can also use the {emmeans} package to make these comparisons as follows using the original model.

library(emmeans)
emm_out <- emmeans(m, "type")
contrast(emm_out, list(c1 = c(1, 1, -1, -1)/2,
                       c2 = c(1, -1, 1, -1)/2,
                       c3 = c(1, -1, -1, 1)/2)) |>
  confint()
##  contrast estimate   SE df lower.CL upper.CL
##  c1            251 24.3 12      198    304.0
##  c2           -176 24.3 12     -230   -123.5
##  c3           -113 24.3 12     -166    -60.2
## 
## Confidence level used: 0.95

Some R Fundamentals

I recently came across the book R Programming for Bioinformatics at my local library and decided to check it out. I don’t do bioinformatics and the book is a little old (published in 2009), but I figured I would browse through it anyway. Chapter 2 is titled R Language Fundamentals. As I was flipping through it I found several little nuggets of information that I had either forgotten about over the years or never knew in the first place. I decided to document them here.

Variable names

Variable names cannot begin with a digit or underscore, and if they begin with a period they cannot be followed by a number. But we can bend these rules by quoting the names with backticks.

`_evil` <- "probably not wise"
`_evil`
## [1] "probably not wise"
`.666_number of the beast` <- sqrt(666^2)
`.666_number of the beast`
## [1] 666
rm(`_evil`, `.666_number of the beast`)

Attributes

Attributes can be attached to any R object except NULL. They can be useful for storing metadata, among many other things. For example, we can add a source to a dataset.

d <- VADeaths
attr(d, "source") <- "Molyneaux, L., Gilliam, S. K., and Florant, L. C.(1947) Differences in Virginia death rates by color, sex, age, and rural or urban residence. American Sociological Review, 12, 525–535."

To see the source:

attr(d, "source")
## [1] "Molyneaux, L., Gilliam, S. K., and Florant, L. C.(1947) Differences in Virginia death rates by color, sex, age, and rural or urban residence. American Sociological Review, 12, 525–535."

To see all attributes of an object:

attributes(d)
## $dim
## [1] 5 4
## 
## $dimnames
## $dimnames[[1]]
## [1] "50-54" "55-59" "60-64" "65-69" "70-74"
## 
## $dimnames[[2]]
## [1] "Rural Male"   "Rural Female" "Urban Male"   "Urban Female"
## 
## 
## $source
## [1] "Molyneaux, L., Gilliam, S. K., and Florant, L. C.(1947) Differences in Virginia death rates by color, sex, age, and rural or urban residence. American Sociological Review, 12, 525–535."

To remove an attribute:

attr(d, "source") <- NULL

Not all attributes are displayed when called on an object. For example, after fitting a linear model, it appears there are only two attributes.

m <- lm(dist ~ speed, data = cars)
attributes(m)
## $names
##  [1] "coefficients"  "residuals"     "effects"       "rank"         
##  [5] "fitted.values" "assign"        "qr"            "df.residual"  
##  [9] "xlevels"       "call"          "terms"         "model"        
## 
## $class
## [1] "lm"

However, elements of the model object also have attributes. For example, the terms element has 10 attributes.

out <- attributes(m$terms)
length(out)
## [1] 10
names(out)
##  [1] "variables"    "factors"      "term.labels"  "order"        "intercept"   
##  [6] "response"     "class"        ".Environment" "predvars"     "dataClasses"
attr(m$terms, "factors")
##       speed
## dist      0
## speed     1

The colon operator

I often forget the colon operator can work with decimal values.

2.5:10.5
## [1]  2.5  3.5  4.5  5.5  6.5  7.5  8.5  9.5 10.5

And can go backwards:

10.2:1.2
##  [1] 10.2  9.2  8.2  7.2  6.2  5.2  4.2  3.2  2.2  1.2
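
The same sequences can be built with seq(), which also allows step sizes other than 1 or -1.

seq(2.5, 10.5, by = 1)
## [1]  2.5  3.5  4.5  5.5  6.5  7.5  8.5  9.5 10.5
seq(10.2, 1.2, by = -1)
##  [1] 10.2  9.2  8.2  7.2  6.2  5.2  4.2  3.2  2.2  1.2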

zero length vectors

The sum of a zero-length vector is 0, but the product of a zero-length vector is 1.

x <- numeric()
length(x)
## [1] 0
sum(x)
## [1] 0
prod(x)
## [1] 1

This ensures expected behavior when working with sums and products, since 0 and 1 are the identity elements for addition and multiplication:

# 12 + 0
sum(12, x)
## [1] 12
# 12 * 1
prod(12, x)
## [1] 12

.Machine

The .Machine variable holds information about the numerical characteristics of your machine. For example, the largest integer my machine can represent:

.Machine$integer.max
## [1] 2147483647

If I add 1 to that, the result is numeric, not an integer.

x <- .Machine$integer.max
x2 <- x + 1
is.integer(x2)
## [1] FALSE

If I add 1L (an explicit integer) to that, the result is a warning and an NA. My machine cannot represent that integer.

x2 <- x + 1L
## Warning in x + 1L: NAs produced by integer overflow
x2
## [1] NA
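
A couple of other values stored in .Machine come up from time to time: the machine epsilon and the largest finite double.

.Machine$double.eps   # smallest positive x such that 1 + x != 1
.Machine$double.xmax  # largest finite double value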

Recoding factors

There are several convenience functions in other packages for recoding variables, such as recode() in the {car} package, case_when() in {dplyr}, and a bunch of functions in the {forcats} package. But it’s good to remember how to use base R to recode factors: create a list with the recoding definitions and assign it to the levels of the factor.

g <- sample(letters[1:5], 30, replace = TRUE)
g <- factor(g)
g
##  [1] e c d c d c b a e c a d e e d b e c b c b c b d d c b a d e
## Levels: a b c d e

Put "a" and "b" into one group, "c" and "d" into another group, and keep "e" in its own group.

lst <- list("A" = c("a", "b"),
            "B" = c("c", "d"),
            "C" = "e")
levels(g) <- lst
g
##  [1] C B B B B B A A C B A B C C B A C B A B A B A B B B A A B C
## Levels: A B C

If we like we can add an attribute to store the definition.

attr(g, "recoding") <- c("A = {ab}, B = {cd}, C = {e}")
g
##  [1] C B B B B B A A C B A B C C B A C B A B A B A B B B A A B C
## attr(,"recoding")
## [1] A = {ab}, B = {cd}, C = {e}
## Levels: A B C

lists can have dimensions

Something that is perhaps more interesting than practical is that lists can have dimensions.

M <- as.table(rbind(c(762, 327, 468), c(484, 239, 477)))
Xsq <- chisq.test(M) # produces 9 element list
Xsq <- unclass(Xsq) # remove htest class
dim(Xsq) <- c(3,3)
Xsq
##      [,1]         [,2]                         [,3]     
## [1,] 30.07015     "Pearson's Chi-squared test" numeric,6
## [2,] 2            "M"                          table,6  
## [3,] 2.953589e-07 table,6                      table,6
Xsq[1,3]
## [[1]]
##          A        B        C
## A 703.6714 319.6453 533.6834
## B 542.3286 246.3547 411.3166

Environments

We are not restricted to creating objects in the Global Environment. We can create our own environments using the new.env() function and then create objects in that environment. We can use the dollar sign operator or the assign() function.

e1 <- new.env()
e1$mod <- lm(dist ~ speed, data = cars)
e1$cumTotal <- function(x)tail(cumsum(x), n = 1)
assign("vals", c(20, 23, 34, 19), envir = e1)
ls(e1)
## [1] "cumTotal" "mod"      "vals"
ls() # list objects in Global Environment
##  [1] "d"   "e1"  "g"   "lst" "m"   "M"   "out" "x"   "x2"  "Xsq"

We can access objects in our environment using the dollar sign operator or the get() and mget() functions.

e1$cumTotal(c(2,4,6))
## [1] 12
get("vals", envir = e1)
## [1] 20 23 34 19
mget(c("mod", "vals"), envir = e1) # get more than one object
## $mod
## 
## Call:
## lm(formula = dist ~ speed, data = cars)
## 
## Coefficients:
## (Intercept)        speed  
##     -17.579        3.932  
## 
## 
## $vals
## [1] 20 23 34 19

We can save the environment and reload it in a future session.

save(e1, file = "e1.Rdata")
rm(e1)
load(file = "e1.Rdata")

We can also change the environment associated with an object that was created in the Global Environment.

f <- function(x)(vals + 1000) # vals object defined in e1 environment
environment(f) <- e1
f
## function(x)(vals + 1000)
## <environment: 0x000001b5ec98d4e0>
f()
## [1] 1020 1023 1034 1019

Notice that if we remove the environment using rm(), the function is still associated with that environment and we still have access to its objects.

rm(e1)
f
## function(x)(vals + 1000)
## <environment: 0x000001b5ec98d4e0>
f()
## [1] 1020 1023 1034 1019

rm(e1) simply removes the binding between the symbol “e1” and the structure that contains the objects. Since that structure can still be reached as the environment of f(), it remains available.
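
We can still reach that environment through f() itself using the environment() function.

ls(environment(f))
## [1] "cumTotal" "mod"      "vals"
get("vals", envir = environment(f))
## [1] 20 23 34 19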

Brackets and Dollar Signs

I found this sentence enlightening: “One way of describing the behavior of the single bracket operator is that the type of the return value matches the type of the value it is applied to.” (p. 28) I prefer this to metaphors involving trains.

lst <- list(a1 = 1:5, b = c("d", "g"), c = 99)
lst["a1"] # returns a list
## $a1
## [1] 1 2 3 4 5

[[ and $ extract single values.

lst[["a1"]]
## [1] 1 2 3 4 5
lst$a1
## [1] 1 2 3 4 5

The $ operator supports partial matching.

lst$a
## [1] 1 2 3 4 5

The [ and [[ operators support expressions, but not partial matching.

ans <- "c"
lst[ans]
## $c
## [1] 99
lst[[ans]]
## [1] 99
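
A quick check (not in the book) shows [[ returning NULL for an incomplete name, unless we explicitly opt in to partial matching with exact = FALSE.

lst[["a"]]
## NULL
lst[["a", exact = FALSE]]
## [1] 1 2 3 4 5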

If names are duplicated in named vectors, then only the value corresponding to the first one is returned when subsetting with brackets.

x <- c("a" = 1, "a" = 2)
x["a"]
## a 
## 1

The %in% operator can be useful to get all elements with the same name.

x[names(x) %in% "a"]
## a a 
## 1 2

Matrix indexing

I don’t work with arrays that often, but when I do I often forget that I can index them with a matrix. Below I extract the value in row 1, column 4, from each of the 3 layers of the iris3 array.

m <- matrix(c(1,4,1,
              1,4,2,
              1,4,3), 
            ncol = 3, byrow = TRUE)
iris3[m]
## [1] 0.2 1.4 2.5

Of course we can get the same result (in this case) using subsetting indices.

iris3[1,4,]
##     Setosa Versicolor  Virginica 
##        0.2        1.4        2.5

Negative subscripts

Negative subscripts can appear on the left side of assignment.

x <- 1:10
x[-(2:4)] <- 99
x
##  [1] 99  2  3  4 99 99 99 99 99 99

Subsetting without dimensions

Use empty single brackets to select all elements without changing any attributes.

x <- matrix(10:1, ncol = 2)
x
##      [,1] [,2]
## [1,]   10    5
## [2,]    9    4
## [3,]    8    3
## [4,]    7    2
## [5,]    6    1
x[] <- sort(x)
x
##      [,1] [,2]
## [1,]    1    6
## [2,]    2    7
## [3,]    3    8
## [4,]    4    9
## [5,]    5   10
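
For contrast, assigning the result without the empty brackets replaces the object entirely, so the matrix structure is lost.

y <- matrix(10:1, ncol = 2)
y <- sort(y)  # y is now a plain vector; the dim attribute is gone
y
##  [1]  1  2  3  4  5  6  7  8  9 10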

rms for the rest of us

For a few years now I’ve been chipping away at Regression Modeling Strategies by Frank Harrell. Between the exposition, the case studies, the notes, the references, the code and exercises, it’s a deep and rich source of statistical learning. The book’s associated R package, {rms}, is used throughout the book and is one of the reasons it can be tough sledding at times, especially if you’re like me and learned regression using the base R functions lm() and glm(). It’s not that {rms} is hard to use. In fact it’s quite easy to use. But using functions such as summary() and anova() with {rms} models produces very different output than what you get with a base R lm() model and can seem baffling to the uninitiated. In this post I hope to explain a little of what {rms} is doing by comparing it to the more traditional approaches in R so commonly taught in classrooms and textbooks.

Model fitting and summary output

To begin, let’s load the gala data from the {faraway} package. The gala data set contains data on species diversity on the Galapagos Islands. Below we use the base R lm() function to model the number of plant Species found on an island as a function of the Area of the island (km\(^2\)), the highest Elevation of the island (m), the distance to the Nearest island (km), the distance from Santa Cruz island (Scruz, km), and the area of the Adjacent island (km\(^2\)). This is how I and thousands of others learned to do regression in R. Of course we use the familiar summary() function on our model object to see the model coefficients, marginal tests, the residual standard error, R-squared, etc.

library(faraway)
data("gala")
m <- lm(Species ~ Area + Elevation + Nearest + Scruz + Adjacent, 
        data = gala)
summary(m)
## 
## Call:
## lm(formula = Species ~ Area + Elevation + Nearest + Scruz + Adjacent, 
##     data = gala)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -111.679  -34.898   -7.862   33.460  182.584 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  7.068221  19.154198   0.369 0.715351    
## Area        -0.023938   0.022422  -1.068 0.296318    
## Elevation    0.319465   0.053663   5.953 3.82e-06 ***
## Nearest      0.009144   1.054136   0.009 0.993151    
## Scruz       -0.240524   0.215402  -1.117 0.275208    
## Adjacent    -0.074805   0.017700  -4.226 0.000297 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 60.98 on 24 degrees of freedom
## Multiple R-squared:  0.7658, Adjusted R-squared:  0.7171 
## F-statistic:  15.7 on 5 and 24 DF,  p-value: 6.838e-07

Now let’s fit the same model using the {rms} package. For this we use the ols() function. Notice we can use R’s formula syntax as usual. One additional argument we need to use that will come in handy in a few moments is x = TRUE. This stores the predictor variables with the model fit as a design matrix. Notice we don’t need to call summary() on the saved model object. We simply print it.

library(rms)
mr <- ols(Species ~ Area + Elevation + Nearest + Scruz + Adjacent, 
            data = gala, x = TRUE)
mr
## Linear Regression Model
## 
## ols(formula = Species ~ Area + Elevation + Nearest + Scruz + 
##     Adjacent, data = gala, x = TRUE)
## 
##                  Model Likelihood    Discrimination    
##                        Ratio Test           Indexes    
## Obs       30    LR chi2     43.55    R2       0.766    
## sigma60.9752    d.f.            5    R2 adj   0.717    
## d.f.      24    Pr(> chi2) 0.0000    g      105.768    
## 
## Residuals
## 
##      Min       1Q   Median       3Q      Max 
## -111.679  -34.898   -7.862   33.460  182.584 
## 
## 
##           Coef    S.E.    t     Pr(>|t|)
## Intercept  7.0682 19.1542  0.37 0.7154  
## Area      -0.0239  0.0224 -1.07 0.2963  
## Elevation  0.3195  0.0537  5.95 <0.0001 
## Nearest    0.0091  1.0541  0.01 0.9932  
## Scruz     -0.2405  0.2154 -1.12 0.2752  
## Adjacent  -0.0748  0.0177 -4.23 0.0003

The coefficient tables and summaries of residuals are the same. Likewise both return R-squared and Adjusted R-squared. The residual standard error from the lm() output is called sigma in the ols() output.
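
We can confirm these are the same quantity by extracting the residual standard error from the lm() fit with the sigma() function.

sigma(m)  # matches the "sigma" value reported by ols(), about 60.98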

The ols output also includes a “g” statistic. This is Gini’s mean difference and measures the dispersion of predicted values. It’s an alternative to standard deviation. We can use the {Hmisc} function GiniMd to calculate Gini’s mean difference for the model fit with lm() as follows.

Hmisc::GiniMd(predict(m))
## [1] 105.7677

Whereas the summary output for lm() includes an F test of the null hypothesis that all of the predictor coefficients equal 0, the ols() function reports a Likelihood Ratio Test. This tests the same null hypothesis using a ratio of likelihoods. We can calculate this test for the lm() model using the logLik() function. First we subtract the log likelihood of a model with no predictors from the log likelihood of the original model, and then multiply by 2. Apparently, according to Wikipedia, multiplying by 2 ensures mathematically that the test statistic converges to a chi-square distribution if the null is true. If the null is true, we expect the ratio of likelihoods to be close to 1, and therefore the difference of log likelihoods (the log of that ratio, by the quotient rule of logs) to be close to 0. This statistic is very large. Clearly at least one of the predictor coefficients is not 0.

LRchi2 <- (logLik(m) - logLik(lm(gala$Species ~ 1)))*2
LRchi2
## 'log Lik.' 43.55341 (df=7)

If we wanted to calculate the p-value that appears in the ols() output, we can use the pchisq() function. (Strictly speaking the LR test has 5 degrees of freedom, the number of predictor coefficients being tested, as reported in the ols() output; the df=7 printed alongside the statistic is just the attribute carried over from the logLik object. With a statistic this large the p-value is vanishingly small either way.) Obviously the p-value in the ols() output is rounded.

pchisq(LRchi2, df = 7, lower.tail = FALSE)
## 'log Lik.' 2.60758e-07 (df=7)
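
If the {lmtest} package is installed, its lrtest() function performs the same likelihood ratio comparison in one step and reports the test on 5 degrees of freedom (the number of predictor coefficients), matching the d.f. in the ols() output. This is a sketch, assuming {lmtest} is available.

# assumes the {lmtest} package is installed
lmtest::lrtest(lm(Species ~ 1, data = gala), m)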

Partial F tests and importance measures

Now let’s use the anova() function on both model objects and compare the output. For the lm() object, we get sequential partial F tests using Type I sums of squares. Each line tests the null hypothesis that adding the listed predictor to the previous model without it explains no additional variability. So the Area line compares a model with just an intercept to a model with an intercept and Area. The Elevation line compares a model with an intercept and Area to a model with an intercept, Area, and Elevation. And so on. Small p-values are evidence against the null.

anova(m)
## Analysis of Variance Table
## 
## Response: Species
##           Df Sum Sq Mean Sq F value    Pr(>F)    
## Area       1 145470  145470 39.1262 1.826e-06 ***
## Elevation  1  65664   65664 17.6613 0.0003155 ***
## Nearest    1     29      29  0.0079 0.9300674    
## Scruz      1  14280   14280  3.8408 0.0617324 .  
## Adjacent   1  66406   66406 17.8609 0.0002971 ***
## Residuals 24  89231    3718                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
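
As a quick check (not in the original), the Area line's sum of squares is the drop in residual sum of squares when Area is added to an intercept-only model. (The F statistics in the table, however, use the full model's error mean square.)

m0 <- lm(Species ~ 1, data = gala)
m1 <- lm(Species ~ Area, data = gala)
deviance(m0) - deviance(m1)  # about 145470, the Area Sum Sq above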

Calling anova() on the ols() object produces a series of partial F tests using Type II sums of squares. Each line tests the null hypothesis that adding the listed predictor to a model with all the other listed predictors already in it explains no additional variability. So the Area line compares a model with all the other predictors to a model with all the other predictors and Area. The Elevation line compares a model with all the other predictors to a model with all the other predictors and Elevation. The line labeled REGRESSION is the F Test reported in the summary output for the lm() model.

anova(mr) 
##                 Analysis of Variance          Response: Species 
## 
##  Factor     d.f. Partial SS   MS           F     P     
##  Area        1   4.237718e+03 4.237718e+03  1.14 0.2963
##  Elevation   1   1.317666e+05 1.317666e+05 35.44 <.0001
##  Nearest     1   2.797576e-01 2.797576e-01  0.00 0.9932
##  Scruz       1   4.635787e+03 4.635787e+03  1.25 0.2752
##  Adjacent    1   6.640639e+04 6.640639e+04 17.86 0.0003
##  REGRESSION  5   2.918500e+05 5.837000e+04 15.70 <.0001
##  ERROR      24   8.923137e+04 3.717974e+03

To run these same Partial F tests for the lm() object we can use the base R drop1() function.

drop1(m, test = "F")
## Single term deletions
## 
## Model:
## Species ~ Area + Elevation + Nearest + Scruz + Adjacent
##           Df Sum of Sq    RSS    AIC F value    Pr(>F)    
## <none>                  89231 251.93                      
## Area       1      4238  93469 251.33  1.1398 0.2963180    
## Elevation  1    131767 220998 277.14 35.4404 3.823e-06 ***
## Nearest    1         0  89232 249.93  0.0001 0.9931506    
## Scruz      1      4636  93867 251.45  1.2469 0.2752082    
## Adjacent   1     66406 155638 266.62 17.8609 0.0002971 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The {rms} package offers a convenient plot method for anova() objects that “draws dot charts depicting the importance of variables in the model” (from the anova.rms help page). The default importance measure is the chi-square statistic for each factor minus its degrees of freedom. There are several other importance measures available. See the anova.rms help page for the what argument.

plot(anova(mr))

IQR effect plots

Earlier we noted that we don’t use the summary() function on ols() model objects to see a model summary. That doesn’t mean there isn’t a summary() method available. There is, but it does something entirely different than the summary method for lm() objects. If we try it right now we get an error message:

summary(mr)
## Error in summary.rms(mr): adjustment values not defined here or with datadist for Area Elevation Nearest Scruz Adjacent

Notice what it says: "adjustment values not defined here or with datadist". This tells us we need to define "adjustment values" before we can use the summary() function. Instead of summarizing the model, the summary() method for {rms} model objects returns a "summary of the effects of each factor". Let’s demonstrate what this means.

The easiest way to define adjustment values is to use the datadist() function on our data frame and then set the global datadist option using the base R options() function. Harrell frequently assigns datadist results to “d” so we do the same. (Also, the help page for datadist states, “The best method is probably to run datadist once before any models are fitted, storing the distribution summaries for all potential variables.” I elected to wait for presentation purposes.)

d <- datadist(gala)
options(datadist = "d")

If we print “d” we see adjustment levels for all variables in the model.

d
##                 Species Endemics      Area Elevation Nearest   Scruz  Adjacent
## Low:effect         13.0     7.25    0.2575     97.75   0.800  11.025    0.5200
## Adjust to          42.0    18.00    2.5900    192.00   3.050  46.650    2.5900
## High:effect        96.0    32.25   59.2375    435.25  10.025  81.075   59.2375
## Low:prediction      2.0     1.45    0.0390     47.35   0.445   0.490    0.1000
## High:prediction   319.1    85.40  782.6215   1229.40  40.205 193.905  782.6215
## Low                 2.0     0.00    0.0100     25.00   0.200   0.000    0.0300
## High              444.0    95.00 4669.3200   1707.00  47.400 290.200 4669.3200
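
If I understand datadist() correctly, for continuous variables the "Low:effect", "Adjust to", and "High:effect" rows are essentially the lower quartile, median, and upper quartile of each variable. A quick check for Area:

quantile(gala$Area, probs = c(0.25, 0.5, 0.75))
# compare to the Area column above: 0.2575, 2.59, 59.2375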

Now let’s call summary() on our {rms} model object and see what it returns.

summary(mr)
##              Effects              Response : Species 
## 
##  Factor    Low     High    Diff.   Effect     S.E.    Lower 0.95 Upper 0.95
##  Area       0.2575  59.238  58.980  -1.411900  1.3225  -4.1413     1.3176  
##  Elevation 97.7500 435.250 337.500 107.820000 18.1110  70.4400   145.2000  
##  Nearest    0.8000  10.025   9.225   0.084353  9.7244 -19.9860    20.1550  
##  Scruz     11.0250  81.075  70.050 -16.849000 15.0890 -47.9910    14.2930  
##  Adjacent   0.5200  59.238  58.718  -4.392400  1.0393  -6.5374    -2.2473

This produces an interquartile-range "Effects" summary for all predictors. For example, in the first row we see the effect of Area is about -1.41. To calculate this we predict Species when Area = 59.238 (High, the 75th percentile) and when Area = 0.2575 (Low, the 25th percentile) and take the difference in predicted Species. For each prediction, all other variables are held at their "Adjust to" level as shown above when we printed the datadist object. In addition to the effect, the standard error (SE) of the effect and a 95% confidence interval on the effect are returned. In this case we’re not sure if the effect is positive or negative.

To calculate this effect measure using our lm() model object, we can use the predict() function to make the two predictions and then take the difference using the diff() function. Notice all the values in the newdata argument come from the datadist object above. The two values for Area are the "Low:effect" and "High:effect" values; the other values come from the "Adjust to" row.

# How to get Area effect in summary(mr) output = -1.411900
p <- predict(m, newdata = data.frame(Area = c(0.2575, 59.238), 
                                Elevation=97.75, 
                                Nearest=3.05,
                                Scruz=46.65,
                                Adjacent=2.59))
p
##        1        2 
## 26.90343 25.49153
diff(p)
##         2 
## -1.411895

To get the standard error and confidence interval we can do the following. (Recall 58.980 is the IQR of Area.)

se <- sqrt((58.980 * vcov(m)["Area","Area"] * 58.980))
se
## [1] 1.32247
diff(p) + c(-1,1) * se * qt(0.975, df = 24)
## [1] -4.141340  1.317549

This is made a little easier using the emmeans() function in the {emmeans} package. We use the at argument to specify at what values we wish to make predictions for Area. All other variables are held at their median by setting cov.reduce = median. Sending the result to the {emmeans} function contrast() with argument "revpairwise" says to subtract the estimated means in reverse order. Finally we pipe into confint() to replicate the IQR effect estimate for Area that we saw in the ols() summary.

library(emmeans)
emmeans(m, specs = "Area", at = list(Area = c(0.2575, 59.238)), 
        cov.reduce = median) |>
  contrast("revpairwise") |>
  confint()
##  contrast                estimate   SE df lower.CL upper.CL
##  Area59.238 - Area0.2575    -1.41 1.32 24    -4.14     1.32
## 
## Confidence level used: 0.95

There is also a plot method for {rms} summary objects. It plots the Effects and the 90, 95, and 99 percent confidence intervals using different shades of blue. Below we see the Area effect of -1.41 with relatively tight confidence intervals hovering around 0.

plot(summary(mr))

Non-linear effects

Harrell advocates using non-linear effects in the form of regression splines. This makes a lot of sense when you pause and consider how many effects in real life are truly linear. Not many. Very few associations in nature indefinitely follow a straight-line relationship. Fortunately, we can easily implement regression splines in R and specify how much non-linearity we want to entertain. We do this in terms of either degrees of freedom or knots. More of either means more potential non-linearity. I personally think of it as the number of times the relationship might change direction.

One way to implement regression splines in R is via the ns() function in the {splines} package, which comes installed with R. Below we fit a model that allows the effect of Nearest to change direction up to three times by specifying df = 3. We might think of this as sort of like using a 3-degree polynomial to model Nearest (but that’s not what we’re doing). The summary shows three coefficients for Nearest, none of which has any straightforward interpretation on its own.

library(splines)
m2 <- lm(Species ~ Area + Elevation + ns(Nearest, df = 3) + 
           Scruz + Adjacent, 
         data = gala)
summary(m2)
## 
## Call:
## lm(formula = Species ~ Area + Elevation + ns(Nearest, df = 3) + 
##     Scruz + Adjacent, data = gala)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -79.857 -28.285  -0.775  25.498 163.069 
## 
## Coefficients:
##                        Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            13.18825   27.39386   0.481   0.6350    
## Area                   -0.03871    0.02098  -1.845   0.0785 .  
## Elevation               0.35011    0.05086   6.883 6.51e-07 ***
## ns(Nearest, df = 3)1 -168.44016   68.41261  -2.462   0.0221 *  
## ns(Nearest, df = 3)2  -56.97623   57.67426  -0.988   0.3339    
## ns(Nearest, df = 3)3   38.61120   46.22137   0.835   0.4125    
## Scruz                  -0.13735    0.19900  -0.690   0.4973    
## Adjacent               -0.08254    0.01743  -4.734   0.0001 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 55.11 on 22 degrees of freedom
## Multiple R-squared:  0.8247, Adjusted R-squared:  0.7689 
## F-statistic: 14.78 on 7 and 22 DF,  p-value: 5.403e-07

To decide whether we should keep this non-linear effect, we could use the anova() function to compare this updated, more complex model to the original model with only linear effects. The null of the test is that the simpler model is adequate, that is, the extra spline terms explain no additional variability. It appears there is some evidence that this non-linearity improves the model.

anova(m, m2)
## Analysis of Variance Table
## 
## Model 1: Species ~ Area + Elevation + Nearest + Scruz + Adjacent
## Model 2: Species ~ Area + Elevation + ns(Nearest, df = 3) + Scruz + Adjacent
##   Res.Df   RSS Df Sum of Sq      F  Pr(>F)  
## 1     24 89231                              
## 2     22 66808  2     22424 3.6921 0.04144 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

To entertain non-linear effects in {rms} use the rcs() function, which stands for restricted cubic splines. Instead of degrees of freedom we specify knots. Three degrees of freedom corresponds to four knots.

mr2 <- ols(Species ~ Area + Elevation + rcs(Nearest, 4) + 
               Scruz + Adjacent, 
             data = gala, x = TRUE)
mr2
## Linear Regression Model
## 
## ols(formula = Species ~ Area + Elevation + rcs(Nearest, 4) + 
##     Scruz + Adjacent, data = gala, x = TRUE)
## 
##                  Model Likelihood    Discrimination    
##                        Ratio Test           Indexes    
## Obs       30    LR chi2     51.32    R2       0.819    
## sigma55.9556    d.f.            7    R2 adj   0.762    
## d.f.      22    Pr(> chi2) 0.0000    g      107.069    
## 
## Residuals
## 
##     Min      1Q  Median      3Q     Max 
## -90.884 -29.870  -3.715  28.019 163.497 
## 
## 
##           Coef      S.E.      t     Pr(>|t|)
## Intercept   14.1147   30.2230  0.47 0.6451  
## Area        -0.0368    0.0212 -1.74 0.0964  
## Elevation    0.3444    0.0512  6.73 <0.0001 
## Nearest      3.6081   16.9795  0.21 0.8337  
## Nearest'  -714.5143  891.8922 -0.80 0.4316  
## Nearest'' 1001.4929 1207.4895  0.83 0.4158  
## Scruz       -0.2286    0.1978 -1.16 0.2602  
## Adjacent    -0.0806    0.0175 -4.60 0.0001

Notice we get three coefficients for Nearest that differ in value from our lm() result. A brief explanation of why that is can be found on datamethods.org. This paper provides a deeper explanation.

Calling the {rms} anova() method on the model object returns a test for nonlinearity under the Nearest test, labeled "Nonlinear". Notice the resulting p-value (0.058) is slightly higher than before and now exceeds 0.05.

anova(mr2)
##                 Analysis of Variance          Response: Species 
## 
##  Factor     d.f. Partial SS MS         F     P     
##  Area        1     9446.295   9446.295  3.02 0.0964
##  Elevation   1   141872.398 141872.398 45.31 <.0001
##  Nearest     3    20348.931   6782.977  2.17 0.1208
##   Nonlinear  2    20348.651  10174.326  3.25 0.0580
##  Scruz       1     4182.177   4182.177  1.34 0.2602
##  Adjacent    1    66204.433  66204.433 21.14 0.0001
##  REGRESSION  7   312198.651  44599.807 14.24 <.0001
##  ERROR      22    68882.715   3131.033

To replicate the {rms} anova() output using lm(), we need to change the ns() arguments. The following R code comes from this hbiostat.org page.

w <- rcs(gala$Nearest, 4)
kn <- attr(w, 'parms')
m2 <- lm(Species ~ Area + Elevation + ns(Nearest, knots = kn[2:3], 
                                         Boundary.knots = c(kn[1], kn[4])) 
         + Scruz + Adjacent, data = gala)
anova(m, m2)
## Analysis of Variance Table
## 
## Model 1: Species ~ Area + Elevation + Nearest + Scruz + Adjacent
## Model 2: Species ~ Area + Elevation + ns(Nearest, knots = kn[2:3], Boundary.knots = c(kn[1], 
##     kn[4])) + Scruz + Adjacent
##   Res.Df   RSS Df Sum of Sq      F  Pr(>F)  
## 1     24 89231                              
## 2     22 68883  2     20349 3.2495 0.05801 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Which approach is better: ns() or rcs()? I don’t pretend to know, and I’m not sure it really matters. The substance of the results doesn’t differ much. Yes, we saw a p-value jump from about 0.04 to about 0.06, but we know better than to make hard binary decisions based on a p-value.

Effect plots

Since interpreting non-linear effect coefficients is all but impossible, we turn to visualization. When fitting a non-linear model with lm(), the {effects} and {ggeffects} packages are fantastic for creating effect plots.

m2 <- lm(Species ~ Area + Elevation + ns(Nearest, df = 3) + 
           Scruz + Adjacent, gala)
library(effects)
Effect("Nearest", m2) |> plot()

library(ggeffects)
ggeffect(m2, "Nearest") |> plot()

In both cases we see the effect of Nearest is negative for values 0 – 20, but then seems to increase for values above 20.

To get a similar plot for an {rms} model, we can use the Predict() function (note the capital “P”; this is an {rms} function) and then pipe into plot(). You can also pipe into ggplot() and plotp() to get ggplot2 and plotly plots, respectively.

Predict(mr2, Nearest) |> plot() 
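
For example, the same effect plot as ggplot2 or interactive plotly output:

Predict(mr2, Nearest) |> ggplot()  # ggplot2 version
Predict(mr2, Nearest) |> plotp()   # interactive plotly version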

To get all effect plots in one go, we can use the {effects} function allEffects() with the lm() model.

allEffects(m2) |> plot()

To do the same with {rms}, we use the Predict() function with no predictors specified. Notice the y-axis has the same limits for all plots.

Predict(mr2) |> plot()

Diagnostics

A linear model makes several assumptions, two of which are constant variance of residuals and normality of residuals. The plot() method for lm() objects makes this easy to assess graphically.

op <- par(mfrow = c(2,2))
plot(m2)