This problem is a beautiful example of a case where the maximum likelihood estimator coincides with the method of moments estimator. In fact, it raises a more general question: when exactly are the two equal? This is ISI MStat 2016 PSB Problem 7.
Let \(X_{1}, X_{2}, \ldots, X_{n}\) be independent and identically distributed copies of a random variable \(X\) with probability mass function
$$
f(x ; \theta)=\frac{x \theta^{x}}{h(\theta)} \quad \text { for } x=1,2,3, \dots
$$
where \(0<\theta<1\) is an unknown parameter and \(h(\theta)\) is a function of \(\theta\). Show that the maximum likelihood estimator of \(\theta\) is also a method of moments estimator.
This \(h(\theta)\) looks really irritating, but it is just the normalizing constant: the pmf must sum to 1.
\( \sum_{x = 1}^{\infty} f(x ; \theta) = \sum_{x = 1}^{\infty} \frac{x \theta^{x}}{h(\theta)} = 1 \)
\( \Rightarrow h(\theta) = \sum_{x = 1}^{\infty} {x \theta^{x}} \)
\( \Rightarrow (1 - \theta) h(\theta) = \sum_{x = 1}^{\infty} x\theta^{x} - \sum_{x = 1}^{\infty} x\theta^{x+1} = \sum_{x = 1}^{\infty} {\theta^{x}} = \frac{\theta}{1 - \theta} \Rightarrow h(\theta) = \frac{\theta}{(1 - \theta)^2}\), since shifting the index in the second sum leaves coefficient \(x - (x-1) = 1\) on each \(\theta^{x}\).
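As a quick numerical sanity check of this closed form, here is a throwaway Python snippet (the value \(\theta = 0.4\) is an arbitrary choice in \((0,1)\)):

```python
# Compare a truncated version of sum_{x >= 1} x * theta^x
# with the closed form theta / (1 - theta)^2 derived above.
theta = 0.4  # arbitrary value in (0, 1)
partial_sum = sum(x * theta**x for x in range(1, 200))  # truncated series
closed_form = theta / (1 - theta) ** 2
print(partial_sum, closed_form)  # both print ~1.1111
```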
\( L(\theta)=\prod_{i=1}^{n} f\left(x_{i} | \theta\right) \)
\( l(\theta) = \log L(\theta) = \sum_{i=1}^{n} \log \left(f\left(x_{i} | \theta\right)\right) \)
Note: every term not involving \( \theta \) is absorbed into a constant (\(c\)).
\( \Rightarrow l(\theta) = c + n\bar{X}\log(\theta) - n\log(h(\theta)) \)
\( l^{\prime}(\theta) = 0 \overset{Check!}{\Rightarrow} \hat{\theta}_{mle} = \frac{\bar{X} -1}{\bar{X} +1}\)
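Spelling out the check, with \( h(\theta) = \frac{\theta}{(1-\theta)^2} \) from above so that \( \log h(\theta) = \log\theta - 2\log(1-\theta) \):
$$
l^{\prime}(\theta) = \frac{n\bar{X}}{\theta} - n\left(\frac{1}{\theta} + \frac{2}{1-\theta}\right) = 0 \Rightarrow \frac{\bar{X}-1}{\theta} = \frac{2}{1-\theta} \Rightarrow \hat{\theta}_{mle} = \frac{\bar{X}-1}{\bar{X}+1}.
$$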
For the method of moments, we need \( E(X)\).
\( E(X) = \sum_{x = 1}^{\infty} xf(x ; \theta) = \sum_{x = 1}^{\infty} \frac{x^2 \theta^{x}}{h(\theta)} \).
\( E(X)(1 - \theta) = \sum_{x = 1}^{\infty} \frac{(2x-1)\theta^{x}}{h(\theta)} \), by the same index-shift trick, using \(x^2 - (x-1)^2 = 2x - 1\).
\( E(X)\theta(1 - \theta) = \sum_{x = 1}^{\infty} \frac{(2x-1)\theta^{x+1}}{h(\theta)} \)
\( E(X)(1 - \theta)^2 = \frac{\sum_{x = 1}^{\infty} 2\theta^{x} - \theta }{h(\theta)} = \frac{\theta(1 + \theta)}{(1 - \theta)h(\theta)}\), subtracting the previous line from the one before it and shifting the index once more.
\( \Rightarrow E(X) = \frac{\theta(1 + \theta)}{(1 - \theta)^3h(\theta)} = \frac{1+\theta}{1-\theta}.\)
\( E(X) = \bar{X} \Rightarrow \frac{1+\hat{\theta}_{mom}}{1-\hat{\theta}_{mom}}= \bar{X} \Rightarrow \hat{\theta}_{mom} = \frac{\bar{X} -1}{\bar{X} +1} = \hat{\theta}_{mle}\). So the maximum likelihood estimator is indeed a method of moments estimator.
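To see this numerically, here is a small simulation sketch (my own illustration, not part of the problem). It uses the fact that \( f(x;\theta) = x\theta^{x-1}(1-\theta)^2 \), so \(X + 1\) has the same distribution as a sum of two independent Geometric(\(1-\theta\)) variables on \(\{1, 2, \ldots\}\), and it compares the closed-form estimator with a direct numerical maximization of the log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta_true, n = 0.6, 100_000  # arbitrary choices for the experiment

# X + 1 ~ sum of two independent Geometric(1 - theta) variables on {1, 2, ...}
x = rng.geometric(1 - theta_true, n) + rng.geometric(1 - theta_true, n) - 1
xbar = x.mean()

# The common closed form for both the MLE and the MoM estimator
closed_form = (xbar - 1) / (xbar + 1)

# Numerical MLE: maximize l(t) = n * xbar * log t - n * log h(t),
# where log h(t) = log t - 2 * log(1 - t)
def neg_loglik(t):
    return -(n * xbar * np.log(t) - n * (np.log(t) - 2 * np.log(1 - t)))

res = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(closed_form, res.x)  # both close to theta_true = 0.6
```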
The normal (with unknown mean and variance), exponential, and Poisson distributions all have sufficient statistics that are sample moments, and their MLEs agree with MoM estimators (not strictly true for the Poisson, where there are multiple MoM estimators).
So, when do you think the method of moments estimator equals the maximum likelihood estimator?
The Pitman–Koopman lemma tells us that the distribution must belong to an exponential family.
Moreover, one can show that the equality holds for a specific form of the exponential family.
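Here is a sketch of one direction. Our pmf can be written as \( f(x ; \theta)=\exp(x\log\theta + \log x - \log h(\theta)) \), a one-parameter exponential family whose natural statistic is \(x\) itself. For any family of that form,
$$
f(x ; \theta)=\exp\big(\eta(\theta)x - A(\theta) + c(x)\big) \Rightarrow l^{\prime}(\theta)=\eta^{\prime}(\theta)\sum_{i=1}^{n} x_{i} - nA^{\prime}(\theta) = 0 \Rightarrow \frac{A^{\prime}(\theta)}{\eta^{\prime}(\theta)} = \bar{X},
$$
and differentiating \( \sum_{x} f(x ; \theta) = 1 \) with respect to \(\theta\) gives \( E(X) = \frac{A^{\prime}(\theta)}{\eta^{\prime}(\theta)} \). So the likelihood equation is exactly the first-moment equation \( E(X) = \bar{X} \), which is why the MLE here is also a method of moments estimator.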
Stay tuned for more such exciting stuff!
