Maximum likelihood is one of those topics in mathematical statistics that takes a while to wrap your head around. At first glance it seems to turn something easy (estimating a parameter with a statistic) into something way more complicated than it needs to be. For example, a frequent exercise is to find the maximum likelihood estimator of the mean of a normal distribution. You take the product of the $n$ normal pdfs, take the log of that, find the first derivative, set it equal to 0 and solve. You find out that it's $\bar{x}$, i.e., the average. Yes, it's good to know theoretically why we use $\bar{x}$ to estimate the mean, but why would we use anything else? To me, way back when, it was akin to a long, elaborate Rube Goldberg process to show that 2 + 2 equals 4. I didn't see the use of it. Of course, if you stick with statistics long enough, you find that maximum likelihood is indeed very useful, especially for proving results regarding efficiency and sufficiency.
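The normal-mean exercise above is easy to check numerically. This is just a sketch under my own assumptions (a simulated $N(5, 1)$ sample and a brute-force grid search; the names are mine): maximizing the log of the product of normal pdfs lands on the sample average.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=1.0, size=1000)  # simulated N(5, 1) sample

def log_lik(mu, x):
    # log of the product of n N(mu, 1) pdfs = sum of the log pdfs
    return -0.5 * len(x) * np.log(2 * np.pi) - 0.5 * np.sum((x - mu) ** 2)

# brute-force maximization over a grid of candidate means
grid = np.linspace(4.0, 6.0, 2001)
mle = grid[np.argmax([log_lik(m, x) for m in grid])]

print(mle, x.mean())  # the grid maximizer sits on top of the sample average
```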
Anyway, one result of maximum likelihood that baffled me for the longest time was the variance of a maximum likelihood estimator. Writing $L(\theta)$ for the likelihood, $\ell(\theta) = \log L(\theta)$ for the log-likelihood, and $\hat{\theta}$ for the maximum likelihood estimator of $\theta$, it's this:

$$\mathrm{Var}(\hat{\theta}) \approx \frac{1}{-E[\ell''(\theta)]}$$

To me this had no intuition at all. In fact it still doesn't. However, it works. Now many statistics books will go over determining the maximum likelihood estimator in painstaking detail, but then they'll blow through the variance of the estimator in a few lines. The purpose of this post is to hopefully fill in those gaps.
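Before deriving it, here's a quick simulation (my own sketch, assuming the $N(\mu, 1)$ model where the math is trivial) showing that the formula works: for that model $\ell''(\mu) = -n$, so the claimed variance of $\hat{\mu} = \bar{x}$ is $1/n$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 50, 20000

# For N(mu, 1) data the MLE is the sample mean and l''(mu) = -n,
# so the formula 1 / (-E[l'']) says Var(mu-hat) should be 1/n.
mles = rng.normal(loc=0.0, scale=1.0, size=(reps, n)).mean(axis=1)

print(mles.var(), 1 / n)  # empirical variance of the MLE vs the formula
```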
It all starts with Taylor's theorem, which says (in so many words):

$$f(x) \approx f(a) + f'(a)(x - a) + \text{remainder}$$
We're interested in approximating the variance, so we forget about the remainder. The rest is another (approximate) way to express $f(x)$. In this case we have $\ell'$, the first derivative of the log-likelihood, as our function, evaluated at $\hat{\theta}$, with $x = \hat{\theta}$ and $a = \theta$. Plug in and we have:

$$\ell'(\hat{\theta}) \approx \ell'(\theta) + \ell''(\theta)(\hat{\theta} - \theta)$$
The first thing to note here is that $\ell'(\hat{\theta}) = 0$. Why? Because $\hat{\theta}$ is the solution to $\ell'(\theta) = 0$. Plugging $\hat{\theta}$ into the first derivative of the log-likelihood produces 0. Substituting that in and doing some algebra we get

$$\hat{\theta} \approx \theta - \frac{\ell'(\theta)}{\ell''(\theta)}$$
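For the normal-mean model, this rearranged Taylor step $\theta - \ell'(\theta)/\ell''(\theta)$ is not just approximate but exact, because the log-likelihood is quadratic. A small check (my own sketch; the starting value, sample, and names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 2.0                      # any trial value of the parameter
x = rng.normal(loc=2.5, scale=1.0, size=100)

score = np.sum(x - theta)        # l'(theta) for N(theta, 1) data
hess = -len(x)                   # l''(theta), constant for this model
step = theta - score / hess      # theta - l'(theta) / l''(theta)

print(step, x.mean())  # the step lands exactly on the MLE, x-bar
```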
Now let's take the variance of that expression:

$$\mathrm{Var}(\hat{\theta}) \approx \mathrm{Var}\left(\theta - \frac{\ell'(\theta)}{\ell''(\theta)}\right)$$
Wait, hold up. $\ell''(\theta)$ is the second derivative evaluated at $\theta$. We don't know $\theta$. We had to approximate it with $\hat{\theta}$. So let's substitute in the expected value, like so:

$$\hat{\theta} \approx \theta - \frac{\ell'(\theta)}{E[\ell''(\theta)]}$$
Now take the variance:

$$\mathrm{Var}(\hat{\theta}) \approx \mathrm{Var}\left(\theta - \frac{\ell'(\theta)}{E[\ell''(\theta)]}\right)$$
OK, getting closer. It's starting to look like the result I showed at the beginning. We still need to find $\mathrm{Var}\left(\theta - \frac{\ell'(\theta)}{E[\ell''(\theta)]}\right)$. Actually we just need to find $\mathrm{Var}(\ell'(\theta))$, since $\theta$ is a constant and does not impact the variance calculation, and $E[\ell''(\theta)]$ is a constant that comes out of the variance squared:

$$\mathrm{Var}(\hat{\theta}) \approx \frac{\mathrm{Var}(\ell'(\theta))}{(E[\ell''(\theta)])^2}$$
First let's recall the useful formula $\mathrm{Var}(X) = E[X^2] - (E[X])^2$. Let's use that here:

$$\mathrm{Var}(\ell'(\theta)) = E[(\ell'(\theta))^2] - (E[\ell'(\theta)])^2$$
Now it turns out that $E[\ell'(\theta)] = 0$. A quick proof can be found on p. 202 of Bulmer's Principles of Statistics. I would happily reproduce it here, but I think it detracts from my goal. So believe me: $E[\ell'(\theta)] = 0$. That changes our formula to

$$\mathrm{Var}(\ell'(\theta)) = E[(\ell'(\theta))^2]$$
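If you don't want to take Bulmer's word for it either, $E[\ell'(\theta)] = 0$ is easy to check by simulation. A sketch under my own assumptions (the $N(\mu, 1)$ model again, where $\ell'(\mu) = \sum_i (x_i - \mu)$, with sample sizes and seed chosen by me):

```python
import numpy as np

rng = np.random.default_rng(4)
mu, n, reps = 1.0, 30, 100000

# many simulated samples from N(mu, 1), evaluated at the TRUE mu
x = rng.normal(loc=mu, scale=1.0, size=(reps, n))
scores = (x - mu).sum(axis=1)   # l'(mu) for each simulated sample

print(scores.mean())            # close to 0: E[l'(theta)] = 0 at the truth
```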
Once again, we have to back up and make yet another observation. Notice the following (quotient rule, with $\ell'(\theta) = L'(\theta)/L(\theta)$):

$$\ell''(\theta) = \frac{d}{d\theta}\left(\frac{L'(\theta)}{L(\theta)}\right) = \frac{L''(\theta)}{L(\theta)} - \frac{L'(\theta)}{L(\theta)} \cdot \frac{L'(\theta)}{L(\theta)}$$
Recall that $\ell'(\theta) = L'(\theta)/L(\theta)$. Rearrange and we get $L'(\theta) = \ell'(\theta)L(\theta)$. Substitute that into our previous formula:

$$\ell''(\theta) = \frac{L''(\theta)}{L(\theta)} - (\ell'(\theta))^2$$
Now rearrange as follows and see what we have:

$$(\ell'(\theta))^2 = -\ell''(\theta) + \frac{L''(\theta)}{L(\theta)}$$
Look at our variance formula we were working on:

$$\mathrm{Var}(\ell'(\theta)) = E[(\ell'(\theta))^2]$$
See where we can make the substitution? Let's do it:

$$\mathrm{Var}(\ell'(\theta)) = E\left[-\ell''(\theta) + \frac{L''(\theta)}{L(\theta)}\right] = -E[\ell''(\theta)] + E\left[\frac{L''(\theta)}{L(\theta)}\right]$$
The expected value of the second term, $E[L''(\theta)/L(\theta)]$, is 0 for the same reason that $E[\ell'(\theta)] = 0$. Take my word for it. That leaves us with

$$\mathrm{Var}(\ell'(\theta)) = -E[\ell''(\theta)]$$
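The chain of steps above amounts to the identity $E[(\ell'(\theta))^2] = -E[\ell''(\theta)]$, which can also be checked numerically. A sketch under my own assumptions (the $N(\mu, 1)$ model, where $-\ell''(\mu) = n$ exactly):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, n, reps = 0.0, 25, 100000

x = rng.normal(loc=mu, scale=1.0, size=(reps, n))
score_sq = ((x - mu).sum(axis=1)) ** 2  # (l'(mu))^2 per simulated sample

# E[(l')^2] should match -E[l''] = n for this model
print(score_sq.mean(), n)
```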
OK, we're ALMOST THERE! Now bring back the full variance expression we had earlier…

$$\mathrm{Var}(\hat{\theta}) \approx \frac{\mathrm{Var}(\ell'(\theta))}{(E[\ell''(\theta)])^2}$$
…and plug in what we just found:

$$\mathrm{Var}(\hat{\theta}) \approx \frac{-E[\ell''(\theta)]}{(E[\ell''(\theta)])^2}$$
Do the cancellation and we get the final reduced expression for the variance of the maximum likelihood estimator:

$$\mathrm{Var}(\hat{\theta}) \approx \frac{1}{-E[\ell''(\theta)]}$$
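To see the final result $\mathrm{Var}(\hat{\theta}) \approx 1/(-E[\ell''(\theta)])$ earn its keep on something less trivial than the normal mean, here's a sketch under my own assumptions using the exponential rate $\lambda$: the MLE is $\hat{\lambda} = 1/\bar{x}$, $\ell''(\lambda) = -n/\lambda^2$, so the formula says $\mathrm{Var}(\hat{\lambda}) \approx \lambda^2/n$.

```python
import numpy as np

rng = np.random.default_rng(6)
lam, n, reps = 2.0, 200, 20000

# exponential with rate lam has mean 1/lam; the MLE of lam is 1/x-bar
x = rng.exponential(scale=1 / lam, size=(reps, n))
lam_hat = 1 / x.mean(axis=1)

formula = lam**2 / n  # 1 / (-E[l'']) = lambda^2 / n
print(lam_hat.var(), formula)  # empirical variance vs the formula
```

The agreement is approximate, as the derivation promises: the Taylor step dropped a remainder, so the formula is an asymptotic (large-$n$) variance.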