The data distribution is:
$$
X \sim \mathrm{Beta}(1, \theta)
$$
The likelihood function is:
\begin{align} L(\theta | X_1, ..., X_n) &= \prod_{i = 1}^{n} \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}X_i^{\alpha - 1}(1 - X_i)^{\beta - 1} \\
&= \prod_{i = 1}^{n} \frac{\Gamma(1 + \theta)}{\Gamma(1)\Gamma(\theta)}X_i^{1 - 1}(1 - X_i)^{\theta - 1} \\
&= \prod_{i = 1}^{n} \frac{\Gamma(1 + \theta)}{\Gamma(1)\Gamma(\theta)}X_i^{0}(1 - X_i)^{\theta - 1} \\
&= \prod_{i = 1}^{n} \frac{\Gamma(1 + \theta)}{\Gamma(1)\Gamma(\theta)} (1 - X_i)^{\theta - 1} \\
\end{align}
Note that \(\Gamma(1) = 0! = 1\),
and, by the recurrence \(\Gamma(x + 1) = x\Gamma(x)\) (which holds for all \(x > 0\), not just integers), \(\frac{\Gamma(1 + \theta)}{\Gamma(\theta)} = \frac{\theta\Gamma(\theta)}{\Gamma(\theta)} = \theta\),
so \( \frac{\Gamma(1 + \theta)}{\Gamma(1)\Gamma(\theta)} = \theta \).
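As a quick numerical sanity check, here is a minimal sketch using scipy.special.gamma (the value 2.7 is an arbitrary non-integer choice); the ratio does evaluate to \(\theta\):

```python
from scipy.special import gamma

# Gamma(x + 1) = x * Gamma(x) holds for all x > 0,
# so Gamma(1 + theta) / Gamma(theta) = theta even when theta is not an integer.
theta = 2.7
print(gamma(1 + theta) / gamma(theta))  # 2.7
```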
\begin{align}
L(\theta | X_1, ..., X_n) &= \prod_{i = 1}^{n} \frac{\Gamma(1 + \theta)}{\Gamma(1)\Gamma(\theta)}(1 - X_i)^{\theta - 1} \\
&= \prod_{i = 1}^{n} \theta (1 - X_i)^{\theta - 1} \\
&= \theta^n \prod_{i = 1}^{n} (1 - X_i)^{\theta - 1} \\
\end{align}
The log-likelihood function is:
\begin{align}
l(\theta | X_1, ..., X_n) &= n\log(\theta) + (\theta - 1)\sum_{i = 1}^{n}\log(1 - X_i) \\
&= n\log(\theta) + \theta \sum_{i = 1}^{n}\log(1 - X_i) - \sum_{i = 1}^{n}\log(1 - X_i) \\
\end{align}
Note that \(\theta\) does not appear in the last term, so the partial derivative with respect to \(\theta\) will have only two terms.
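Before differentiating, the algebra can be checked numerically. The sketch below (an illustrative check; the true \(\theta\), the seed, and the sample size are arbitrary choices) compares the closed-form log-likelihood against the sum of log-densities from scipy.stats.beta:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
theta = 3.0
x = beta.rvs(1, theta, size=50, random_state=rng)
n = len(x)

# Closed-form log-likelihood derived above
ll_closed = n * np.log(theta) + (theta - 1) * np.sum(np.log(1 - x))

# Sum of log-densities computed by scipy, for comparison
ll_scipy = beta.logpdf(x, 1, theta).sum()

print(np.isclose(ll_closed, ll_scipy))  # True
```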
The first partial derivative of the log-likelihood function is:
$$
\frac{\partial l}{\partial \theta} = \frac{n}{\theta} + \sum_{i = 1}^{n}\log(1 - X_i)
$$
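The closed-form score can likewise be verified against a central finite difference of the log-likelihood (again an illustrative sketch; the step size \(h\) is an arbitrary small value):

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)
theta = 2.0
x = beta.rvs(1, theta, size=40, random_state=rng)
n, s = len(x), np.sum(np.log(1 - x))

def loglik(t):
    return n * np.log(t) + (t - 1) * s

# Central finite difference should match the closed form n/theta + s
h = 1e-6
fd = (loglik(theta + h) - loglik(theta - h)) / (2 * h)
print(np.isclose(fd, n / theta + s))  # True
```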
The maximum likelihood estimator for \(\theta\) is obtained by setting the derivative equal to zero and then solving for \(\theta\):
\begin{align}
0 &= \frac{n}{\theta} + \sum_{i = 1}^{n}\log(1 - X_i) \\
-\frac{n}{\theta} &= \sum_{i = 1}^{n}\log(1 - X_i) \\
-n &= \theta \sum_{i = 1}^{n}\log(1 - X_i) \\
\hat{\theta} &= \frac{-n}{\sum_{i = 1}^{n}\log(1 - X_i)} \\
\end{align}
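Applying the estimator to simulated data (a sketch with arbitrary settings for the true \(\theta\), the seed, and \(n\)), \(\hat{\theta}\) should land near the true value when \(n\) is large:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(2)
theta = 3.0
x = beta.rvs(1, theta, size=10_000, random_state=rng)

# MLE: theta_hat = -n / sum(log(1 - X_i))
theta_hat = -len(x) / np.sum(np.log(1 - x))
print(theta_hat)  # approximately 3.0
```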
The second partial derivative with respect to \(\theta\) of the log-likelihood function is:
$$
\frac{\partial^{2} l}{\partial \theta^2} = -\frac{n}{\theta^2}
$$
Since this is negative for all \(\theta > 0\), the log-likelihood is strictly concave and the critical point is indeed a maximum.
The Fisher Information is:
$$
I(\theta) = -E\Bigg[\frac{\partial^{2} l}{\partial \theta^2}\Bigg] = -E\Bigg[-\frac{n}{\theta^2}\Bigg] = \frac{n}{\theta^2}
$$
The asymptotic variance of the maximum likelihood estimator, the inverse of the Fisher Information, is:
$$
V[\hat{\theta}] \approx \frac{1}{I(\theta)} = \frac{\theta^2}{n}
$$
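A Monte Carlo check of this variance (an illustrative sketch; the settings below are arbitrary): draw many samples of size \(n\), compute \(\hat{\theta}\) for each, and compare the empirical variance of the estimates with \(\theta^2 / n\).

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)
theta, n, reps = 3.0, 500, 2_000

estimates = np.empty(reps)
for r in range(reps):
    x = beta.rvs(1, theta, size=n, random_state=rng)
    estimates[r] = -n / np.sum(np.log(1 - x))

print(estimates.var())  # Monte Carlo variance of theta_hat
print(theta**2 / n)     # asymptotic variance: 0.018
```

For \(n = 500\) the two numbers should agree closely, in line with the asymptotic approximation.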