In my previous two posts I showed worked solutions to problems 2.5 and 11.7 in Bulmer’s *Principles of Statistics*, both of which involve the characteristics of self-fertilizing hybrid sweet peas. It turns out that problem 11.8 *also* involves this same topic, so why not work it as well for completeness. The problem asks us to assume that we were unable to find an explicit solution for the maximum likelihood equation in problem 11.7 and to solve it by using the following iterative method:

\( \theta_{1} = \theta_{0} + \frac{S(\theta_{0})}{I(\theta_{0})} \)

where \( S(\theta_{0}) \) is the value of \( \frac{d \log L}{d\theta}\) evaluated at \( \theta_{0}\) and \( I(\theta_{0})\) is the value of \( -E(\frac{d^{2}\log L}{d\theta^{2}})\) evaluated at \( \theta_{0}\).

So we begin with \( \theta_{0}\) and the iterative method returns \( \theta_{1}\). Now we run the iterative method again starting with \( \theta_{1}\) and get \( \theta_{2}\):

\( \theta_{2} = \theta_{1} + \frac{S(\theta_{1})}{I(\theta_{1})} \)

We repeat this process until we converge upon a value. This is the Newton-Raphson method (strictly speaking, when the expected information is used in place of the observed second derivative it is also known as Fisher scoring, but the mechanics are the same). Naturally this is something we would like to have the computer do for us.

First, recall our formulas from problem 11.7:

\( \frac{d \log L}{d\theta} = \frac{1528}{2 + \theta} - \frac{223}{1 - \theta} + \frac{381}{\theta} \)

\( \frac{d^{2}\log L}{d \theta^{2}} = -\frac{1528}{(2 + \theta)^{2}} - \frac{223}{(1 - \theta)^{2}} - \frac{381}{\theta^{2}} \)

Let’s write functions for those in R:

# maximum likelihood score
mls <- function(x) {
  1528/(2 + x) - 223/(1 - x) + 381/x
}

# second derivative of the log-likelihood (so the information is -inf(x))
inf <- function(x) {
  -1528/((2 + x)^2) - 223/((1 - x)^2) - 381/(x^2)
}
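Before writing the full iteration loop, we can sanity-check one step by hand. Starting from the trial value of 1/4 that the problem suggests, a single application of the iterative formula should produce the first iterate (the function definitions are repeated so the snippet runs on its own):

```r
# score and second derivative, repeated here so the snippet is self-contained
mls <- function(x) 1528/(2 + x) - 223/(1 - x) + 381/x
inf <- function(x) -1528/((2 + x)^2) - 223/((1 - x)^2) - 381/(x^2)

# one Newton-Raphson step from the trial value 1/4
theta0 <- 1/4
theta1 <- theta0 + mls(theta0)/-inf(theta0)
theta1  # 0.5304977
```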

Now we can use those functions in another function that will run the iterative method starting at a trial value:

# newton-raphson, using -inf(x) (the negative second derivative) as I(x)
nr <- function(th) {
  prev <- th
  repeat {
    new <- prev + mls(prev)/-inf(prev)
    if (abs(prev - new)/abs(new) < 0.0001) break
    prev <- new
  }
  new
}

This function first takes its argument and names it "prev". Then it starts a repeating loop. The first thing the loop does is calculate the new value using the iterative formula. It then checks whether the difference between the new and previous values, divided by the new value, is less than 0.0001. If it is, the loop breaks and the "new" value is returned. If not, the loop repeats. Notice that each iteration is hopefully converging on a value. As it converges, the difference between the "prev" and "new" values gets smaller and smaller, so small that dividing the difference by the "new" value (or the "prev" value, for that matter) begins to approach 0.

To run this function, we simply call it from the console. Let's start with a value of \( \theta_{0} = \frac{1}{4}\), as the problem suggests:

nr(1/4)
[1] 0.7844304
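As an independent cross-check (not part of the problem), R's built-in `uniroot()` can find the zero of the score function numerically. The score is positive at 0.5 and negative at 0.99, so the root is bracketed by that interval:

```r
# score function, repeated here so the snippet is self-contained
mls <- function(x) 1528/(2 + x) - 223/(1 - x) + 381/x

# root of the score function on a bracketing interval
root <- uniroot(mls, interval = c(0.5, 0.99), tol = 1e-9)$root
root  # about 0.7844304
```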

There you go! We could make the function tell us a little more by outputting the iterative values and number of iterations. Here's a super quick and dirty way to do that:

# newton-raphson, now recording the iterates along the way
nr <- function(th) {
  k <- 1    # number of iterations
  v <- c()  # iterative values
  prev <- th
  repeat {
    new <- prev + mls(prev)/-inf(prev)
    v[k] <- new
    if (abs(prev - new)/abs(new) < 0.0001) break
    prev <- new
    k <- k + 1
  }
  print(new)  # the value we converged on
  print(v)    # the iterative values
  print(k)    # number of iterations
}

Now when we run the function we get this:

nr(1/4)
[1] 0.7844304
[1] 0.5304977 0.8557780 0.8062570 0.7863259 0.7844441 0.7844304
[1] 6
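For what it's worth, this score equation does have a closed form, the very one the problem told us to pretend we couldn't find: multiplying the score by \( \theta(1 - \theta)(2 + \theta) \) and collecting terms gives the quadratic \( 2132\theta^{2} - 701\theta - 762 = 0 \) (the coefficients follow directly from the score function above). Its positive root should agree with the value the iteration converged on:

```r
# positive root of 2132*theta^2 - 701*theta - 762 = 0, via the quadratic formula
theta_hat <- (701 + sqrt(701^2 + 4 * 2132 * 762)) / (2 * 2132)
theta_hat  # about 0.7844304
```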

We see it took 6 iterations to converge. And with that I think I've had my fill of heredity problems for a while.