
score function

Defines: likelihood equation
Synonym: score, score statistic
Type of Math Object: Definition
Major Section: Reference
Mathematics Subject Classification: 62A01

Comments

It says we evaluate the expectation of the score function and set it to zero. But with respect to which distribution is the expectation evaluated?

If we have a Bernoulli variable and observe a heads and b tails,
then

log-likelihood: l(t)=a log(t) + b log(1-t)
score function: U(t) = a/t - b/(1-t)

The maximum likelihood solution is a/(a+b), which is the solution of U(t) = 0.

So it looks like here we are setting the score function to 0 directly, and not its expectation. Am I missing something here?
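(As a quick numerical check of this example, here is a small Python sketch, not part of the original discussion; the counts a = 7 heads and b = 3 tails are made up for illustration. It finds the root of the score U(t) = a/t - b/(1-t) and compares it with a/(a+b).)

```python
# Sketch: solve the Bernoulli score equation U(t) = 0 numerically and
# compare with the closed-form MLE a/(a+b).  Counts a, b are illustrative.
from scipy.optimize import brentq

a, b = 7, 3  # hypothetical numbers of heads and tails

def score(t):
    # U(t) = dl/dt for l(t) = a*log(t) + b*log(1 - t)
    return a / t - b / (1 - t)

t_hat = brentq(score, 1e-9, 1 - 1e-9)  # root of the score function
print(t_hat, a / (a + b))              # both print (approximately) 0.7
```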

First, for a given random variable, say X, with probability density function f(X;t), where t is the parameter to be estimated, we find the log-likelihood function (with respect to X). Then the score function is calculated by taking the derivative with respect to t. Finally, the expectation of the score function with respect to the random variable X is calculated. This is then set to zero and solved for t, in terms of E(X).

So, in your example, we are interested in the MLE of t for a binomial random variable, with density function f(X;t) = (n choose X) t^X (1-t)^(n-X). Taking the natural log and then the first partial derivative with respect to t, you have
U(t) = X/t - (n-X)/(1-t).
Setting E(U) = 0, we have 0 = E(X)/t - (n-E(X))/(1-t). Now solve for t in terms of E(X) to get t = E(X)/n. Given the experimental result, n = a+b and the sample expectation of X is a. So the MLE is t = E(X)/n = a/(a+b).
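(A symbolic check of these steps, sketched in Python with sympy; the symbol names simply mirror the derivation above.)

```python
# Sketch: reproduce the binomial derivation symbolically.
import sympy as sp

t, X, n = sp.symbols('t X n', positive=True)

# Log-likelihood of Binomial(n, t), dropping the constant term log(n choose X)
loglik = X * sp.log(t) + (n - X) * sp.log(1 - t)

# Score function U(t) = d(log-likelihood)/dt
U = sp.diff(loglik, t)
print(U)                          # equivalent to X/t - (n - X)/(1 - t)

# Solving U(t) = 0 gives t = X/n; replacing X by E(X) reproduces t = E(X)/n
print(sp.solve(sp.Eq(U, 0), t))   # [X/n]
```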

I will add an example (perhaps this one) to my entry to clarify...

Chi

Hi,

I think the expectation of the score function will always be zero; only the variance is affected by the parameter.

I think the likelihood equations should be "score(\theta) = 0", not "expect(score(\theta)) = 0". If you do the integration necessary in computing the expectation, you end up differentiating a constant and thus get zero.
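(A quick Monte Carlo sketch of that claim in Python; the true parameter t0 = 0.3 and sample size n = 50 are made up. At the true parameter the score averages to zero across repeated samples, and its variance comes out close to the Fisher information n/(t0(1-t0)).)

```python
# Sketch: check empirically that E[U(t0)] = 0 at the true parameter t0,
# for the binomial score U(t) = X/t - (n - X)/(1 - t).
import numpy as np

rng = np.random.default_rng(0)
t0, n = 0.3, 50                          # made-up true parameter and sample size

X = rng.binomial(n, t0, size=100_000)    # many replicated experiments
U = X / t0 - (n - X) / (1 - t0)          # score evaluated at the truth

print(U.mean())   # close to 0: the score has mean zero at t0
print(U.var())    # close to n / (t0 * (1 - t0)), the Fisher information
```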

Am I wrong?

Thanks,

Greg

You're right. Thanks for pointing it out. How does it look now?

Chi

Much better :)

Perhaps "The maximum likelihood estimate (MLE) of" would be better than just "MLE \theta".

Also, and this is probably just pedantry, you can form the likelihood equations by setting the gradient of the likelihood function itself to zero (it doesn't have to be the log-likelihood), since ln is monotonic. This may, in some strange cases, be easier to differentiate, I suppose.

Just out of interest, it's obvious that the point won't be a minimum, but I suppose it could be a point of inflection, so perhaps there is a need to check this before declaring \theta an MLE.
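(Both points can be checked on the Bernoulli example from earlier in the thread; here is a small sympy sketch with the made-up counts a = 7, b = 3. It confirms that the likelihood and the log-likelihood share the interior stationary point t = 7/10, and that the second derivative of the log-likelihood is negative there, so the point is a maximum rather than an inflection.)

```python
# Sketch: verify the two remarks above for the Bernoulli example with a=7, b=3.
import sympy as sp

t = sp.symbols('t', positive=True)
a, b = 7, 3

L = t**a * (1 - t)**b                    # likelihood
l = a * sp.log(t) + b * sp.log(1 - t)    # log-likelihood

# Same interior stationary point from either equation
print(sp.solve(sp.Eq(sp.diff(L, t), 0), t))  # includes 7/10 (plus the boundary root 1)
print(sp.solve(sp.Eq(sp.diff(l, t), 0), t))  # [7/10]

# Second derivative of the log-likelihood at t_hat is negative, so it is a maximum
t_hat = sp.Rational(a, a + b)
print(sp.diff(l, t, 2).subs(t, t_hat))       # -1000/21 < 0
```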

Greg

Right again, Greg! When I created this entry, I had pdfs from the exponential families in mind, so naturally the log-likelihood functions are easier to use. Also, I was trying to tie the likelihood equations to the score function.

Go ahead, you should be able to edit the entry now as well.

Chi

Thanks Chi. I hope the corrections are OK.

Greg
