
A Probability Problem in Heredity – Part 3

In my previous two posts I showed worked solutions to problems 2.5 and 11.7 in Bulmer’s Principles of Statistics, both of which involve the characteristics of self-fertilizing hybrid sweet peas. It turns out that problem 11.8 also involves this same topic, so why not work it as well for completeness. The problem asks us to assume that we were unable to find an explicit solution for the maximum likelihood equation in problem 11.7 and to solve it by using the following iterative method:

\( \theta_{1} = \theta_{0} + \frac{S(\theta_{0})}{I(\theta_{0})} \)

where \( S(\theta_{0}) \) is the value of \( \frac{d \log L}{d\theta}\) evaluated at \( \theta_{0}\) and \( I(\theta_{0})\) is the value of \( -E(\frac{d^{2}\log L}{d\theta^{2}})\) evaluated at \( \theta_{0}\).

So we begin with \( \theta_{0}\) and the iterative method returns \( \theta_{1}\). Now we run the iterative method again starting with \( \theta_{1}\) and get \( \theta_{2}\):

\( \theta_{2} = \theta_{1} + \frac{S(\theta_{1})}{I(\theta_{1})} \)

We repeat this process until we converge upon a value. This is called the Newton-Raphson method. Naturally this is something we would like to have the computer do for us.

First, recall our formulas from problem 11.7:

\( \frac{d \log L}{d\theta} = \frac{1528}{2 + \theta} - \frac{223}{1 - \theta} + \frac{381}{\theta} \)
\( \frac{d^{2}\log L}{d \theta^{2}} = -\frac{1528}{(2 + \theta)^{2}} -\frac{223}{(1 - \theta)^{2}} -\frac{381}{\theta^{2}} \)

Let’s write functions for those in R:

# score function: the first derivative of the log-likelihood
mls <- function(x) {
  1528/(2 + x) - 223/(1 - x) + 381/x
}
# second derivative of the log-likelihood
# (the information is the negative of this)
inf <- function(x) {
  -1528/((2 + x)^2) - 223/((1 - x)^2) - 381/(x^2)
}
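As a quick aside of my own, we can sanity-check these against the estimate found in the previous post: the score should be roughly 0 at the maximum likelihood estimate.

# the score evaluated at the MLE from the previous post; should be near 0
mls(0.7844)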

Now we can use those functions in another function that will run the iterative method starting at a trial value:

# newton-raphson using the expected information
nr <- function(th) {
  prev <- th
  repeat {
    new <- prev + mls(prev)/-inf(prev)
    if (abs(prev - new)/abs(new) < 0.0001)
      break
    prev <- new
  }
  new
}

This function takes its argument and names it "prev". Then it starts a repeating loop. The first thing the loop does is calculate the new value using the iterative formula. It then checks whether the difference between the new and previous values, divided by the new value, is less than 0.0001. If it is, the loop breaks and the function returns the "new" value, which is printed to the console. If not, the loop repeats. Notice that each iteration is hopefully converging on a value. As it converges, the difference between the "prev" and "new" values gets smaller and smaller, so small that dividing the difference by the "new" value (or the "prev" value, for that matter) approaches 0.

To run this function, we simply call it from the console. Let's start with a value of \( \theta_{0} = \frac{1}{4}\), as the problem suggests:

nr(1/4)
[1] 0.7844304

There you go! We could make the function tell us a little more by outputting the iterative values and number of iterations. Here's a super quick and dirty way to do that:

# newton-raphson using the expected information,
# also tracking the iterative values and iteration count
nr <- function(th) {
  k <- 1    # number of iterations
  v <- c()  # iterative values
  prev <- th
  repeat {
    new <- prev + mls(prev)/-inf(prev)
    v[k] <- new
    if (abs(prev - new)/abs(new) < 0.0001)
      break
    prev <- new
    k <- k + 1
  }
  print(new) # the value we converged on
  print(v)   # the iterative values
  print(k)   # number of iterations
}

Now when we run the function we get this:

nr(1/4)
[1] 0.7844304
[1] 0.5304977 0.8557780 0.8062570 0.7863259 0.7844441 0.7844304
[1] 6

We see it took 6 iterations to converge.
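One final aside: if you'd rather not roll your own iteration at all, base R's uniroot() will find the zero of the score function directly. A minimal sketch using the mls() function defined above:

# find where the score crosses zero on (0, 1)
uniroot(mls, interval = c(0.01, 0.99))$root

This should agree with the Newton-Raphson result. And with that I think I've had my fill of heredity problems for a while.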

A Probability Problem in Heredity – Part 2

In my last post I solved a problem from chapter 2 of M.G. Bulmer's Principles of Statistics. In this post I work through a problem in chapter 11 that is basically a continuation of the chapter 2 problem. If you take a look at the previous post, you'll notice we were asked to find probabilities in terms of theta. I did that, and it's nice and all, but we can go further. We actually have data, so we can estimate theta. And that's what the problem in chapter 11 asks us to do. If you're wondering why it took 9 chapters to get from finding theta to estimating theta, that's because the first problem involved basic probability and this one requires maximum likelihood. It's a bit of a jump where statistical background is concerned.

The results of the last post were as follows:

                    purple-flowered                  red-flowered
long pollen         \( \frac{1}{4}(\theta + 2) \)    \( \frac{1}{4}(1 - \theta) \)
round pollen        \( \frac{1}{4}(1 - \theta) \)    \( \frac{1}{4}\theta \)

The table gives the probabilities of the four possible phenotypes when hybrid sweet peas are allowed to self-fertilize. For example, the probability of a self-fertilizing sweet pea producing a purple flower with long pollen is \( \frac{1}{4}(\theta + 2)\). In this post we'll estimate theta from our data. Recall that \( \theta = (1 - \pi)^{2} \), where \( \pi \) is the probability of the dominant and recessive genes of a characteristic switching chromosomes.

Here’s the data:

                    Purple-flowered    Red-flowered
Long pollen         1528               117
Round pollen        106                381

We see from the table that there are four mutually exclusive outcomes when the sweet pea self-fertilizes. If we think of each outcome as having its own probability of occurrence, then we can think of these data as a sample from a multinomial distribution. Since chapter 11 covers maximum likelihood estimation, the problem asks us to use the multinomial likelihood function to estimate theta.

Now the maximum likelihood estimator for each probability is \( \hat{p_{i}} = \frac{x_{i}}{n} \). But we can’t use that. That’s estimating four parameters. We need to estimate one parameter, theta. So we need to go back to the multinomial maximum likelihood function and define \( p_{i}\) in terms of theta. And of course we’ll work with the log likelihood since it’s easier to work with sums than products.

\( \log L(\theta) = y_{1} \log p_{1} + y_{2} \log p_{2} + y_{3} \log p_{3} + y_{4} \log p_{4} \)

If you’re not sure how I got that, google “log likelihood multinomial distribution” for more PDF lecture notes than you can ever hope to read.
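Before going further, here's what that log-likelihood looks like as actual R code: a minimal sketch of my own (the object names are mine) using base R's dmultinom(), with optimize() included as a numerical check on the algebra below.

# observed counts: purple/long, red/long, purple/round, red/round
y <- c(1528, 117, 106, 381)
# cell probabilities as a function of theta (from the table above)
probs <- function(theta) c((2 + theta)/4, (1 - theta)/4, (1 - theta)/4, theta/4)
# multinomial log-likelihood
loglik <- function(theta) dmultinom(y, prob = probs(theta), log = TRUE)
# maximize numerically over (0, 1)
optimize(loglik, interval = c(0.001, 0.999), maximum = TRUE)

The maximum should land on essentially the same estimate we derive by hand below.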

Now let’s define the probabilities in terms of one parameter, theta:

\( \log L(\theta) = y_{1} \log f_{1}(\theta) + y_{2} \log f_{2}(\theta) + y_{3} \log f_{3}(\theta) + y_{4} \log f_{4}(\theta) \)

Now take the derivative. Once we have that, we can set it equal to 0 and solve for theta. The solution is the value of theta at which the log likelihood attains its maximum:

\( \frac{d \log L(\theta)}{d\theta} = \frac{y_{1}}{f_{1}(\theta)} f'_{1}(\theta) + \frac{y_{2}}{f_{2}(\theta)} f'_{2}(\theta) + \frac{y_{3}}{f_{3}(\theta)} f'_{3}(\theta) + \frac{y_{4}}{f_{4}(\theta)} f'_{4}(\theta)\)

Time to go from the abstract to the applied with our values. The y’s are the data from our table and the functions of theta are the results from the previous problem.

\( \frac{d \log L(\theta)}{d\theta} = \frac{1528}{1/4(2 + \theta)} \frac{1}{4} - \frac{117}{1/4(1 - \theta)}\frac{1}{4} - \frac{106}{1/4(1 - \theta)}\frac{1}{4} + \frac{381}{1/4(\theta)} \frac{1}{4} \)
\( \frac{d \log L(\theta)}{d\theta} = \frac{1528}{2 + \theta} - \frac{117}{1 - \theta} - \frac{106}{1 - \theta} + \frac{381}{\theta} \)
\( \frac{d \log L(\theta)}{d\theta} = \frac{1528}{2 + \theta} - \frac{223}{1 - \theta} + \frac{381}{\theta} \)

Set equal to 0 and solve for theta. Beware, it gets messy.

\( \frac{[1528(1 - \theta)\theta] - [223(2 + \theta)\theta] + [381(2 + \theta)(1 - \theta)]}{(2 + \theta)(1 - \theta)\theta} = 0\)

Yeesh. Fortunately, since the fraction equals 0 and the denominator is nonzero for theta between 0 and 1, we only need to set the numerator equal to 0. Multiplying out the terms in the numerator leaves us with this:

\( 1528\theta - 1528\theta^{2} - 446\theta - 223\theta^{2} + 762 - 381\theta - 381\theta^{2} = 0\)

And that reduces to the following quadratic equation:

\( -2132\theta^{2} + 701\theta + 762 = 0\)

I propose we let a computer solve this equation; the quadratic formula works by hand, but base R's polyroot() function (shown below) will do it for us. Our coefficients are a = -2132, b = 701, and c = 762. Since it's a quadratic equation we get two answers:

\( x_{1} = -0.4556 \)
\( x_{2} = 0.7844 \)

The answer has to be a probability between 0 and 1, so we toss the negative root and behold our maximum likelihood estimate of theta: 0.7844.
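Here's the polyroot() check I mentioned. It takes the coefficients in increasing order of degree and returns the roots as complex numbers (here with zero imaginary part), so the real parts should match the two answers above:

# roots of 762 + 701*theta - 2132*theta^2
polyroot(c(762, 701, -2132))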

Remember that \( \theta = (1 - \pi)^{2}\). If we solve for pi, we get \( \pi = 1 - \theta^{1/2}\), which works out to be 0.1143. That is, we estimate the probability of characteristic genes switching chromosomes to be about 11%. Therefore we can think of theta as the probability of having two parents that experienced no gene switching.
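In R that's a one-liner:

# estimated probability of switching over
1 - sqrt(0.7844)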

Now point estimates are just estimates. We would like to know how good the estimate is. That's where confidence intervals come into play. Let's calculate one for theta.

It turns out that we can estimate the variability of our maximum likelihood estimator as follows:

\( V(\theta) = \frac{1}{I(\theta)}\), where \( I(\theta) = -E(\frac{d^{2}\log L}{d \theta^{2}}) \)

We need to determine the second derivative of our log likelihood equation, take the expected value by plugging in our maximum likelihood estimator, multiply that by -1, and then take the reciprocal. The second derivative works out to be:

\( \frac{d^{2}\log L}{d \theta^{2}} = -\frac{1528}{(2 + \theta)^{2}} -\frac{223}{(1 - \theta)^{2}} -\frac{381}{\theta^{2}} \)

The negative expected value of the second derivative is obtained by plugging in our estimate of 0.7844 and multiplying by -1. Let’s head to the R console to calculate this:

th <- 0.7844 # our ML estimate
(I <- -1 * (-1528/((2+th)^2) - 223/((1-th)^2) - 381/(th^2))) # information
[1] 5613.731

Now take the reciprocal and we have our variance:

(v.th <- 1/I)
[1] 0.0001781347

We can take the square root of the variance to get the standard error and multiply by 1.96 to get the margin of error for our estimate. Then add/subtract the margin of error to our estimate for a confidence interval:

# confidence limits on theta
0.784+(1.96*sqrt(v.th)) # upper bound
[1] 0.8101596
0.784-(1.96*sqrt(v.th)) # lower bound
[1] 0.7578404

Finally we convert the confidence interval for theta to a confidence interval for pi:

# probability of switching over
th.ub <- 0.784+(1.96*sqrt(v.th))
th.lb <- 0.784-(1.96*sqrt(v.th))
1-sqrt(th.ub) # lower bound
[1] 0.09991136
1-sqrt(th.lb) # upper bound
[1] 0.1294597

The probability of genes switching chromosomes is most probably in the range of 10% to 13%.

The variance of a maximum likelihood estimator

Maximum likelihood is one of those topics in mathematical statistics that takes a while to wrap your head around. At first glance it seems to be making something that seems easy (estimating a parameter with a statistic) into something way more complicated than it needs to be. For example, a frequent exercise is to find the maximum likelihood estimator of the mean of a normal distribution. You take the product of the n normal pdfs, take the log of that, find the first derivative, set it equal to 0 and solve. You find out that it’s \( \bar{x}\), i.e., the average. Yes, it’s good to know theoretically why we use \( \bar{x}\) to estimate the mean, but why would we use anything else? To me, way back when, it was akin to a long elaborate Rube Goldberg process to show that 2 + 2 equaled 4. I didn’t see the use of it. Of course if you stick with statistics long enough you find that maximum likelihood is indeed very useful, especially for proving results regarding efficiency and sufficiency.
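For reference, that normal-mean derivation fits in a couple of lines (treating \( \sigma^{2}\) as known):

$$ \log L(\mu) = -\frac{n}{2}\log(2\pi\sigma^{2}) - \frac{1}{2\sigma^{2}}\sum_{i=1}^{n}(x_{i} - \mu)^{2} $$

$$ \frac{d \log L}{d\mu} = \frac{1}{\sigma^{2}}\sum_{i=1}^{n}(x_{i} - \mu) = 0 \quad \Rightarrow \quad \widehat{\mu} = \bar{x} $$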

Anyway, one result of maximum likelihood that baffled me for the longest time was the variance of a maximum likelihood estimator. It’s this:

$$ Var(\widehat{\theta}) \approx 1/-E(\frac{d^{2}logL}{d\theta^{2}})$$

To me this had no intuition at all. In fact it still doesn’t. However it works. Now many statistics books will go over determining the maximum likelihood estimator in painstaking detail, but then they’ll blow through the variance of the estimator in a few lines. The purpose of this post is to hopefully fill in those gaps.

It all starts with Taylor’s theorem, which says (in so many words):

$$ f(x) \approx f(a) + f'(a)(x-a) + \text{remainder}$$

We're interested in approximating the variance, so we forget about the remainder. The rest is another (approximate) way to express \( f(x)\). In this case our function is \( f = \frac{dlogL}{d\theta}\), and we take \( x = \widehat{\theta}\) and \( a = \theta\). Plug in and we have:

$$ f(\widehat{\theta}) \approx \frac{dlogL}{d\theta} + \frac{d^{2}logL}{d\theta^{2}}(\widehat{\theta}-\theta)$$

The first thing to note here is that \( f(\widehat{\theta})=0\). Why? Because \( \widehat{\theta}\) is the solution to \( \frac{dlogL}{d\theta} = 0\). Plugging \( \widehat{\theta}\) into the function produces 0. Substituting that in and doing some algebra we get

$$ \widehat{\theta} \approx -\frac{dlogL}{d\theta} / \frac{d^{2}logL}{d\theta^{2}} + \theta$$

Now let’s take the variance of that expression:

$$ Var(\widehat{\theta}) \approx Var(-\frac{dlogL}{d\theta} / \frac{d^{2}logL}{d\theta^{2}} + \theta)$$

Wait, hold up. \( \frac{d^{2}logL}{d\theta^{2}}\) is evaluated at \( \theta \), which we don't know, and it's a random quantity besides, since it depends on the data. To get something constant that we can pull out of the variance, let's substitute in its expected value, like so:

$$ Var(-\frac{dlogL}{d\theta} / E(\frac{d^{2}logL}{d\theta^{2}}) + \theta)$$

Now take the variance:

$$ Var(\widehat{\theta}) \approx 1/(E(\frac{d^{2}logL}{d\theta^{2}}))^{2} \times Var(\frac{dlogL}{d\theta} + \theta)$$

OK, getting closer. It’s starting to look like the result I showed at the beginning. We still need to find \( Var(\frac{dlogL}{d\theta} + \theta)\). Actually we just need to find \( Var(\frac{dlogL}{d\theta})\) since \( \theta \) is a constant and does not impact the variance calculation.
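(For the record, the rule used to pull that constant out is the usual one for a linear transformation of a random variable \( X\): for constants \( a\) and \( b\),

$$ Var(aX + b) = a^{2}Var(X) $$

with \( a = -1/E(\frac{d^{2}logL}{d\theta^{2}})\) and \( b = \theta\) in our case.)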

First let's recall the useful formula \( Var(x) = E(x^{2}) - (E(x))^{2}\). Let's use that here:

$$ Var(\frac{dlogL}{d\theta}) = E((\frac{dlogL}{d\theta})^{2}) - (E(\frac{dlogL}{d\theta}))^{2}$$

Now it turns out that \( E(\frac{dlogL}{d\theta}) = 0\). A quick proof can be found on p. 202 of Bulmer's Principles of Statistics. I would happily reproduce it here but I think it detracts from my goal. So believe me. \( E(\frac{dlogL}{d\theta}) = 0\). That changes our formula to

$$ Var(\frac{dlogL}{d\theta}) = E((\frac{dlogL}{d\theta})^{2}) $$
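(If you can't quite take my word for it, the gist of the proof is that, assuming we can swap differentiation and integration and writing \( L\) as the density of the data,

$$ E(\frac{dlogL}{d\theta}) = \int \frac{1}{L}\frac{dL}{d\theta}L \, dx = \int \frac{dL}{d\theta} \, dx = \frac{d}{d\theta}\int L \, dx = \frac{d}{d\theta}(1) = 0 $$

since any density integrates to 1.)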

Once again, we have to back up and make yet another observation. Notice the following:

$$ \frac{d^{2}logL}{d\theta^{2}} = \frac{d}{d\theta}(\frac{1}{L}\frac{dL}{d\theta})=-\frac{1}{L^{2}}(\frac{dL}{d\theta})^{2} + \frac{1}{L}\frac{d^{2}L}{d\theta^{2}}$$

Recall that \( \frac{d log L}{d\theta}=\frac{1}{L}\frac{dL}{d\theta}\). Rearrange and we get \( \frac{dL}{d\theta}=\frac{d log L}{d\theta}L\). Substitute that into our previous formula:

$$ \frac{d^{2}logL}{d\theta^{2}} =-\frac{1}{L^{2}}(\frac{d log L}{d\theta}L)^{2} + \frac{1}{L}\frac{d^{2}L}{d\theta^{2}}$$

$$ \frac{d^{2}logL}{d\theta^{2}} =-(\frac{d log L}{d\theta})^{2} + \frac{1}{L}\frac{d^{2}L}{d\theta^{2}}$$

Now rearrange as follows and see what we have:

$$ -\frac{d^{2}logL}{d\theta^{2}} + \frac{1}{L}\frac{d^{2}L}{d\theta^{2}} =(\frac{d log L}{d\theta})^{2} $$

Look at our variance formula we were working on:

$$ Var(\frac{dlogL}{d\theta}) = E((\frac{dlogL}{d\theta})^{2}) $$

See where we can make the substitution? Let’s do it:

$$ Var(\frac{dlogL}{d\theta}) = E(-\frac{d^{2}logL}{d\theta^{2}} + \frac{1}{L}\frac{d^{2}L}{d\theta^{2}}) $$

The expected value of the second term is 0 for the same reason that \( E(\frac{dlogL}{d\theta}) = 0\). Take my word for it. That leaves us with…

$$ Var(\frac{dlogL}{d\theta}) = E(-\frac{d^{2}logL}{d\theta^{2}}) = -E(\frac{d^{2}logL}{d\theta^{2}})$$

OK, we’re ALMOST THERE! Now bring back the full variance expression we had earlier…

$$ Var(\widehat{\theta}) \approx 1/(E(\frac{d^{2}logL}{d\theta^{2}}))^{2} \times Var(\frac{dlogL}{d\theta} + \theta)$$

…and plug what we just found:

$$ Var(\widehat{\theta}) \approx 1/(E(\frac{d^{2}logL}{d\theta^{2}}))^{2} \times -E(\frac{d^{2}logL}{d\theta^{2}})$$

Do the cancellation and we get the final reduced expression for the variance of the maximum likelihood estimator:

$$ Var(\widehat{\theta}) \approx 1/-E(\frac{d^{2}logL}{d\theta^{2}})$$
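If you want to see this result in action, here's a quick simulation sketch in R under a model of my own choosing (a Poisson, not something from Bulmer). The MLE of \( \lambda\) is the sample mean, and \( -E(\frac{d^{2}logL}{d\lambda^{2}}) = n/\lambda\), so the formula predicts \( Var(\widehat{\lambda}) \approx \lambda/n\).

# simulate the sampling distribution of the Poisson MLE (the sample mean)
set.seed(1)
lambda <- 3
n <- 50
mle <- replicate(10000, mean(rpois(n, lambda)))
var(mle)    # empirical variance of the MLE
lambda/n    # 1/(-E(d^2 logL/d lambda^2)), the predicted variance

The two numbers should be very close.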