ISI MStat PSB 2009 Problem 4 | Polarized to Normal

This is a very beautiful sample problem from ISI MStat PSB 2009 Problem 4. It is based on the idea of polar transformations, but it needs a good deal of observation to realize that. Give it a try!

Problem- ISI MStat PSB 2009 Problem 4


Let \(R\) and \(\theta\) be independent and non-negative random variables such that \(R^2 \sim {\chi_2}^2 \) and \(\theta \sim U(0,2\pi)\). Fix \(\theta_o \in (0,2\pi)\). Find the distribution of \(R\sin(\theta+\theta_o)\).

Prerequisites


Convolution

Polar Transformation

Normal Distribution

Solution :

This problem may get nasty if one tries to find the required distribution by the so-called CDF method. It is better to observe a bit before moving forward!! Recall how we derive the probability distribution of the sample variance of a sample from a normal population?

Yes, you are thinking right, we need to use Polar Transformation !!

But before transforming, let's make some modifications to reduce future complications.

Given \(\theta \sim U(0,2\pi)\) and \(\theta_o\) some fixed number in \((0,2\pi)\), let \(Z=\theta+\theta_o \sim U(\theta_o,2\pi +\theta_o)\).

Hence, we need to find the distribution of \(R\sin Z\). Now, from the given and modified information, the joint pdf of \(R\) and \(Z\) is,

\(f_{R,Z}(r,z)=\frac{r}{2\pi}\exp(-\frac{r^2}{2}), \ \ r>0, \ \theta_o \le z \le 2\pi +\theta_o \)

Now, let the transformation be \((R,Z) \to (X,Y)\),

\(X=R\cos Z \\ Y=R\sin Z\), where \(X,Y \in \mathbb{R}\).

Hence, \(R^2=X^2+Y^2 \\ Z= \tan^{-1} (\frac{Y}{X}) \)

Now verify that the Jacobian of the transformation is \(J(\frac{r,z}{x,y})=\frac{1}{r}\).

Hence, the joint pdf of \(X\) and \(Y\) is,

\(f_{X,Y}(x,y)=f_{R,Z}(\sqrt{x^2+y^2}, \tan^{-1}(\frac{y}{x}))\, J(\frac{r,z}{x,y}) =\frac{1}{2\pi}\exp(-\frac{x^2+y^2}{2})\), \(x,y \in \mathbb{R}\).

Yeah, now it looks familiar, right?! This is the joint pdf of two independent standard normal random variables.

Since we need the distribution of \(Y=R\sin Z=R\sin(\theta+\theta_o)\), we integrate \(f_{X,Y}\) with respect to \(x\) over the real line, and we end up with the conclusion that,

\(R\sin(\theta+\theta_o) \sim N(0,1)\). Hence, we are done!!
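Before moving on, here is a minimal Monte Carlo sketch (Python, standard library only; the seed, the sample size, and the particular \(\theta_o\) are arbitrary choices) that simulates \(R\sin(\theta+\theta_o)\) and checks that its sample mean and variance match those of \(N(0,1)\):

```python
import math
import random

random.seed(0)
theta_0 = 1.0        # an arbitrary fixed angle in (0, 2*pi)
n = 200_000

samples = []
for _ in range(n):
    # R^2 ~ chi-squared with 2 df, i.e. Exponential with mean 2
    r = math.sqrt(random.expovariate(0.5))
    theta = random.uniform(0, 2 * math.pi)    # theta ~ U(0, 2*pi)
    samples.append(r * math.sin(theta + theta_0))

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
print(f"mean ~ {mean:.3f} (expect 0), variance ~ {var:.3f} (expect 1)")
```

Any other fixed \(\theta_o\) gives the same answer, which is exactly what the solution predicts.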


Food For Thought

From the above solution, the distribution of \(R\cos(\theta+\theta_o)\) is also determinable, right? Can you go further and investigate the distribution of \(\tan(\theta+\theta_o)\)? Here \(R\) and \(\theta\) are the same variables as defined in the question.

Give it a try !!




ISI MStat PSB 2008 Problem 2 | Definite integral as the limit of the Riemann sum

This is a very beautiful sample problem from ISI MStat PSB 2008 Problem 2 based on the definite integral as the limit of a Riemann sum. Let's give it a try !!

Problem- ISI MStat PSB 2008 Problem 2


For \( k \geq 1,\) let \( a_{k}=\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{m=1}^{kn} \exp \left(-\frac{1}{2} \frac{m^{2}}{n^{2}}\right) \)

Find \( \lim_{k \rightarrow \infty} a_{k} \) .

Prerequisites


Integration

Gamma function

Definite integral as the limit of the Riemann sum

Solution :

\( a_{k}=\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{m=1}^{kn} \exp \left(-\frac{1}{2} \frac{m^{2}}{n^{2}}\right) = \int_{0}^{k} e^{-\frac{y^2}{2}} dy \), since the sum is precisely a Riemann sum for this integral; see Definite Integral as the Limit of the Riemann Sum for details.

Therefore, \( \lim_{k \to \infty} a_{k}= \int_{0}^{ \infty} e^{-\frac{y^2}{2}} dy \) ----(1). Let \( \frac{y^2}{2}=z \Rightarrow dy= \frac{dz}{\sqrt{2z}} \).

Substituting, we get \( \int_{0}^{ \infty} z^{\frac{1}{2} -1} e^{-z} \frac{1}{\sqrt{2}} dz =\frac{ \Gamma(\frac{1}{2}) }{\sqrt{2}} = \sqrt{\frac{\pi}{2}} \), since \( \Gamma(\frac{1}{2})=\sqrt{\pi} \).
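As a quick sanity check, one can evaluate the Riemann sum directly for moderate values of \(n\) and \(k\) and watch it approach \(\sqrt{\frac{\pi}{2}} \approx 1.2533\). A rough sketch (Python; the choice \(n=2000\) and the test values of \(k\) are arbitrary):

```python
import math

def a_k(k: int, n: int = 2000) -> float:
    """Riemann sum (1/n) * sum_{m=1}^{kn} exp(-m^2 / (2 n^2))."""
    return sum(math.exp(-0.5 * (m / n) ** 2) for m in range(1, k * n + 1)) / n

for k in (1, 2, 5, 10):
    print(k, round(a_k(k), 4))
print("limit:", round(math.sqrt(math.pi / 2), 4))   # 1.2533
```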

Statistical Insight

Let \( X \sim N(0,1) \), i.e., \(X\) is a standard normal random variable. Then,

\( Y=|X| \), called the folded normal, has pdf \( f_{Y}(y)= \begin{cases} \frac{2}{\sqrt{2 \pi }} e^{-\frac{y^2}{2}} & , y>0 \\ 0 &, \text{otherwise} \end{cases} \). (Verify!)

So, from (1) we can say that \( \int_{0}^{ \infty} e^{-\frac{y^2}{2}} dy = \frac{\sqrt{2 \pi }}{2} \int_{0}^{ \infty}\frac{2}{\sqrt{2 \pi }} e^{-\frac{y^2}{2}} dy = \frac{\sqrt{2 \pi }}{2} \int_{0}^{ \infty} f_{Y}(y) dy \)

\( =\frac{\sqrt{2 \pi }}{2} \times 1 = \sqrt{\frac{\pi}{2}} \) (as \(f_Y\) is the pdf of the folded normal distribution, it integrates to 1).


Food For Thought

Find the same when \( a_{k}=\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{m=1}^{kn} {(\frac{m}{n})}^{5} \exp \left(-\frac{1}{2} \frac{m}{n}\right) \).




ISI MStat PSB 2008 Problem 3 | Functional equation

This is a very beautiful sample problem from ISI MStat PSB 2008 Problem 3 based on a functional equation. Let's give it a try !!

Problem- ISI MStat PSB 2008 Problem 3


Let \(g\) be a continuous function with \( g(1)=1 \) such that \( g(x+y)=5 g(x) g(y) \) for all \( x, y .\) Find \( g(x) \).

Prerequisites


Continuity & Differentiability

Differential equation

Cauchy's functional equation

Solution :

We are given that \(g\) is a continuous function such that \( g(x+y)=5 g(x) g(y) \) for all \( x, y \), and \(g(1)=1\).

Now putting \(x=y=0\), we get \( g(0)=5{g(0)}^2 \Rightarrow g(0)=0 \) or \( g(0)= \frac{1}{5} \).

If \(g(0)=0\), then \( g(x)=g(x+0)=5g(x)g(0)=0 \) for all \(x\), but we are given that \(g(1)=1\). Hence, contradiction.

So, \(g(0)=\frac{1}{5} \) .

Now , we can write \( g'(x)= \lim_{h \to 0} \frac{g(x+h)-g(x)}{h} = \lim_{h \to 0} \frac{5g(x)g(h)-g(x)}{h} \)

\(= 5g(x) \lim_{h \to 0} \frac{g(h)- \frac{1}{5} }{ h} = 5g(x) \lim_{h \to 0} \frac{g(h)- g(0) }{ h} = 5g(x)g'(0) \) (by definition)

Therefore, \( g'(x)=5g'(0)g(x)= kg(x) \), for some constant \(k\), say.

Now we will solve this differential equation. Let \(y=g(x)\); then from the above we have

\( \frac{dy}{dx} = ky \Rightarrow \frac{dy}{y}=k\,dx \). Integrating both sides we get,

\( \ln(y)=kx+c \), where \(c\) is the constant of integration. So we get \( y=e^{kx+c} \Rightarrow g(x)=e^{kx+c} \).

Solve the equations \(g(0)=\frac{1}{5}\) and \(g(1)=1\) to get the values of \(k\) and \(c\). Finally we get \( g(x)=\frac{1}{5} e^{(\ln 5) x} =5^{x-1}\).

But there is a little mistake in this solution.

What's the mistake?

Answer: Here we assumed that \(g\) is differentiable at \(x=0\), which may not be true.

Correct Solution comes here!

We are given that \( g(x+y)=5 g(x) g(y) \) for all \( x, y \). Note that \(g\) never vanishes: if \(g(a)=0\) for some \(a\), then \(g(x)=5g(a)g(x-a)=0\) for all \(x\), contradicting \(g(1)=1\); moreover \(g(x)=5{g(\frac{x}{2})}^2>0\). So we may take logarithms on both sides, getting

\( \log(g(x+y))=\log 5+\log(g(x))+\log(g(y)) \Rightarrow \log_5 (g(x+y))=1+\log_5 (g(x))+\log_5 (g(y)) \)

\( \Rightarrow \log_5 (g(x+y)) +1= (\log_5 (g(x))+1)+(\log_5 (g(y)) +1) \Rightarrow \phi(x+y)=\phi(x)+\phi(y) \), where \( \phi(x)=1+\log_5 (g(x)) \)

This is Cauchy's functional equation, and \(\phi\) is continuous. Hence \( \phi(x)=cx \) for some constant \(c\) \( \Rightarrow 1+\log_5 (g(x))=cx \Rightarrow g(x)=5^{cx-1} \).

Now \(g(1)=1 \Rightarrow 5^{c-1}=1 \Rightarrow c=1 \).

Therefore, \(g(x)=5^{x-1} \).
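A quick numerical check of the final answer (a sketch; the test points are arbitrary): \(g(x+y)\) should equal \(5g(x)g(y)\), and \(g(1)\) should be \(1\).

```python
def g(x: float) -> float:
    return 5 ** (x - 1)

assert abs(g(1) - 1) < 1e-12
for x, y in [(0.3, 1.7), (-2.0, 0.5), (4.0, -4.0)]:
    # functional equation: g(x + y) = 5 * g(x) * g(y)
    assert abs(g(x + y) - 5 * g(x) * g(y)) < 1e-9
print("g(x) = 5**(x-1) satisfies g(x+y) = 5 g(x) g(y) and g(1) = 1")
```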


Food For Thought

Let \( f:\mathbb{R} \to \mathbb{R} \) be a non-constant, three times differentiable function. If \( f(1+ \frac{1}{n})=1\) for all integers \(n\), then find \( f''(1) \).




ISI MStat PSB 2009 Problem 6 | abNormal MLE of Normal

This is a very beautiful sample problem from ISI MStat PSB 2009 Problem 6. It is based on the idea of Restricted Maximum Likelihood Estimators, and Mean Squared Errors. Give it a try!

Problem-ISI MStat PSB 2009 Problem 6


Suppose \(X_1,.....,X_n\) are i.i.d. \(N(\theta,1)\), \(\theta_o \le \theta \le \theta_1\), where \(\theta_o < \theta_1\) are two specified numbers. Find the MLE of \(\theta\) and show that it is better than the sample mean \(\bar{X}\) in the sense of having smaller mean squared error.

Prerequisites


Maximum Likelihood Estimators

Normal Distribution

Mean Squared Error

Solution :

This is a very interesting problem! We all know that if the condition "\(\theta_o \le \theta \le \theta_1\), for some specified numbers \(\theta_o < \theta_1\)" had not been given, then the MLE would have been simply \(\bar{X}=\frac{1}{n}\sum_{k=1}^n X_k\), the sample mean of the given sample. But due to the restriction over \(\theta\), things get interestingly complicated.

So, to simplify a bit, let's write the likelihood function of \(\theta\) given this sample \(\vec{X}=(X_1,....,X_n)'\),

\(L(\theta |\vec{X})={(\frac{1}{\sqrt{2\pi}})}^n \exp(-\frac{1}{2}\sum_{k=1}^n(X_k-\theta)^2)\), when \(\theta_o \le \theta \le \theta_1\). Now taking the natural log on both sides and differentiating, we find that,

\(\frac{d\ln L(\theta|\vec{X})}{d\theta}= \sum_{k=1}^n (X_k-\theta) \).

Now, verify that if \(\bar{X} < \theta_o\), then \(L(\theta |\vec{X})\) is always a decreasing function of \(\theta\) [where \(\theta_o \le \theta \le \theta_1\)], hence the maximum likelihood is attained at \(\theta_o\) itself. Similarly, when \(\theta_o \le \bar{X} \le \theta_1\), the maximum is attained at \(\bar{X}\). Lastly, when \(\bar{X} > \theta_1\), the likelihood function is increasing, hence the maximum is found at \(\theta_1\).

Hence, the Restricted Maximum Likelihood Estimator of \(\theta\) is, say,

\(\hat{\theta_{RML}} = \begin{cases} \theta_o & \bar{X} < \theta_o \\ \bar{X} & \theta_o\le \bar{X} \le \theta_1 \\ \theta_1 & \bar{X} > \theta_1 \end{cases}\)

Now, we check that \(\hat{\theta_{RML}}\) is a better estimator than \(\bar{X}\) in terms of Mean Squared Error (MSE).

Now, \(MSE_{\theta}(\bar{X})=E_{\theta}(\bar{X}-\theta)^2=\int_{-\infty}^{\infty} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}\), where \(f_{\bar{X}}\) is the density of \(\bar{X}\),

\(=\int_{-\infty}^{\theta_o} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}+\int_{\theta_o}^{\theta_1} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}+\int_{\theta_1}^{\infty} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}\)

\(\ge \int_{-\infty}^{\theta_o} (\theta_o-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}+\int_{\theta_o}^{\theta_1} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}+\int_{\theta_1}^{\infty} (\theta_1-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}\)

[since on \(\{\bar{x} < \theta_o\}\) we have \(|\bar{x}-\theta| \ge \theta-\theta_o = |\theta_o-\theta|\) as \(\theta \ge \theta_o\), and similarly on \(\{\bar{x} > \theta_1\}\)]

\(=E_{\theta}(\hat{\theta_{RML}}-\theta)^2=MSE_{\theta}(\hat{\theta_{RML}})\).

Hence proved !!
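A small simulation sketch makes the comparison concrete (Python, standard library; the bounds, the true \(\theta\), the sample size, and the replication count are all arbitrary choices). The restricted MLE is just \(\bar{X}\) clamped to \([\theta_o, \theta_1]\):

```python
import random
import statistics

random.seed(1)
theta_0, theta_1 = -1.0, 1.0        # the specified bounds
theta_true, n, reps = 0.9, 10, 50_000

se_mean, se_rml = [], []
for _ in range(reps):
    xbar = statistics.fmean(random.gauss(theta_true, 1) for _ in range(n))
    rml = min(max(xbar, theta_0), theta_1)   # clamp xbar to [theta_0, theta_1]
    se_mean.append((xbar - theta_true) ** 2)
    se_rml.append((rml - theta_true) ** 2)

print("MSE of sample mean   :", round(statistics.fmean(se_mean), 5))
print("MSE of restricted MLE:", round(statistics.fmean(se_rml), 5))   # smaller
```

Clamping can only move \(\bar{X}\) closer to a \(\theta\) lying in \([\theta_o,\theta_1]\), which is exactly the pointwise inequality used in the proof above.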


Food For Thought

Now, can you find an unbiased estimator for \(\theta^2\)? Okay, now it's quite easy, right? But is the estimator you are thinking about the best unbiased estimator? Calculate the variance and check whether it attains the Cramér-Rao lower bound.

Give it a try !! You may need the help of Stein's Identity.




ISI MStat PSB 2009 Problem 3 | Gamma is not abNormal

This is a very simple but beautiful sample problem from ISI MStat PSB 2009 Problem 3. It is based on recognizing a density function and then using the CLT. Try it!

Problem- ISI MStat PSB 2009 Problem 3


Using an appropriate probability distribution, or otherwise, show that,

\( \lim\limits_{n\to\infty}\int_0^n \frac{e^{-x}x^{n-1}}{(n-1)!}\,dx =\frac{1}{2}\).

Prerequisites


Gamma Distribution

Central Limit Theorem

Normal Distribution

Solution :

Here all we need is to recognize the structure of the integrand. Look, the integrand is integrated over a range of non-negative real numbers. Now, even though it is not mentioned explicitly that \(x\) is a random variable, we can assume \(x\) to be some value taken by a random variable \(X\). After all, we can find randomness anywhere and everywhere !!

Now observe that the integrand has a structure identical to the density function of a gamma random variable with parameters \(1\) and \(n\). So, if we assume that \(X\) is \(Gamma(1, n)\), then our limiting integral transforms to,

\(\lim\limits_{n\to\infty}P(X \le n)\).

Now, we know that if \(X \sim Gamma(1,n)\), then its mean and variance both are \(n\).

So, as \(n \uparrow \infty\), \(\frac{X-n}{\sqrt{n}} \to N(0,1)\) by the Central Limit Theorem, since \(X\) can be written as a sum of \(n\) i.i.d. \(Exponential(1)\) random variables.

Hence, \(\lim\limits_{n\to\infty}P(X \le n)=\lim\limits_{n\to\infty}P(\frac{X-n}{\sqrt{n}} \le 0)=\Phi (0)=\frac{1}{2}\). [Here \(\Phi(z)\) is the cdf of the standard normal at \(z\).]

Hence proved !!
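The convergence can also be watched numerically. Here is a rough simulation sketch (Python, standard library; the values of \(n\) and the replication count are arbitrary), where \(X \sim Gamma(1,n)\) is generated as a sum of \(n\) i.i.d. \(Exponential(1)\) variables:

```python
import random

random.seed(2)

def p_x_le_n(n: int, reps: int = 10_000) -> float:
    """Monte Carlo estimate of P(X <= n) for X ~ Gamma(1, n)."""
    hits = 0
    for _ in range(reps):
        x = sum(random.expovariate(1.0) for _ in range(n))  # sum of n Exp(1)
        hits += x <= n
    return hits / reps

for n in (5, 50, 500):
    print(n, p_x_le_n(n))   # approaches 1/2 as n grows
```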


Food For Thought

Can you do the proof under the "otherwise" condition?

Give it a try !!




ISI MStat PSB 2006 Problem 2 | Cauchy & Schwarz come to rescue

This is a very subtle sample problem from ISI MStat PSB 2006 Problem 2. After seeing this problem, one may think of using Lagrange multipliers, but one can find an easier and more beautiful way, if one is really keen to find it. Can you?

Problem- ISI MStat PSB 2006 Problem 2


Maximize \(x+y\) subject to the condition that \(2x^2+3y^2 \le 1\).

Prerequisites


Cauchy-Schwarz Inequality

Tangent-Normal

Conic section

Solution :

This is a beautiful problem, but only if one notices the trick, or else things get ugly.

Now we need to find the maximum of \(x+y\) when it is given that \(2x^2+3y^2 \le 1\). Seeing the given condition we always think of using Lagrange Multipliers, but I find that thing very nasty, and always find ways to avoid it.

So let's recall the famous Cauchy-Schwarz Inequality, \((ab+cd)^2 \le (a^2+c^2)(b^2+d^2)\).

Now, let's take \(a=\sqrt{2}x ; b=\frac{1}{\sqrt{2}} ; c= \sqrt{3}y ; d= \frac{1}{\sqrt{3}} \), and observe that our inequality reduces to,

\((x+y)^2 \le (2x^2+3y^2)(\frac{1}{2}+\frac{1}{3}) \le \frac{1}{2}+\frac{1}{3}=\frac{5}{6} \Rightarrow x+y \le \sqrt{\frac{5}{6}}\). Hence the maximum of \(x+y\) subject to the given condition \(2x^2+3y^2 \le 1\) is \(\sqrt{\frac{5}{6}}\), attained when equality holds in Cauchy-Schwarz, i.e., when \(2x=3y\) and \(2x^2+3y^2=1\). Hence we got what we want without even doing any nasty calculations.
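A quick numeric check of this bound (a sketch; the grid resolution is arbitrary): parametrize the boundary \(2x^2+3y^2=1\) by \(x=\frac{\cos t}{\sqrt{2}}, y=\frac{\sin t}{\sqrt{3}}\) and scan for the maximum of \(x+y\).

```python
import math

# scan the boundary 2x^2 + 3y^2 = 1 via x = cos(t)/sqrt(2), y = sin(t)/sqrt(3)
best = max(
    math.cos(t) / math.sqrt(2) + math.sin(t) / math.sqrt(3)
    for t in (2 * math.pi * i / 100_000 for i in range(100_000))
)
print(best, math.sqrt(5 / 6))   # both ~ 0.9129
```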

Another nice approach to this problem is to look through pictures. The given condition \(2x^2+3y^2 \le 1\) represents an elliptical disc, and \(x+y=k\) is a family of parallel straight lines passing through that disc.

[Figure: the disc \(2x^2+3y^2 \le 1\) and the line with maximum intercept.]

Hence the line with the maximum intercept among all the lines passing through the given disc represents the maximized value of \(x+y\). So, basically, if a line of the form \(x+y=k_o\) (say) is a tangent to the disc, then it represents the line with maximum intercept from the mentioned family of lines. So, we just need to find the point on the boundary of the disc where a line of the form \(x+y=k_o\) touches as a tangent. Can you finish the rest and verify whether the maximum intercept is \(k_o= \sqrt{\frac{5}{6}}\) or not?


Food For Thought

Can you show another alternative solution to this problem? No Lagrange multipliers, please!! How would you like to find the point of tangency if the disc were circular? Show us the solution and we will post it in the comments.

Keep thinking !!




ISI MStat PSB 2007 Problem 4 | Application of Newton Leibniz theorem

This is a very beautiful sample problem from ISI MStat PSB 2007 Problem 4 based on the use of the Newton-Leibniz theorem. Let's give it a try !!

Problem- ISI MStat PSB 2007 Problem 4


Let \( f: \mathbb{R} \rightarrow \mathbb{R}\) be a bounded continuous function. Define \( g:[0, \infty) \rightarrow \mathbb{R} \) by,
\( g(x)=\int_{-x}^{x}(2 x t+1) f(t) dt \)
Show that g is differentiable on \( (0, \infty) \) and find the derivative of g.

Prerequisites


Riemann integrability

Continuity

Newton Leibniz theorem

Solution :

As \( f: \mathbb{R} \rightarrow \mathbb{R}\) is a bounded continuous function, say \(|f(t)| \le M\) for all \(t\), we have for \(t \in [-x,x]\),

\( |\Phi(t)|=|(2xt+1)f(t)|=|2xt+1||f(t)| \le (2|x||t|+1)M \le (2x^2+1)M \), which is finite for each fixed \(x\), so \( \Phi(t)=(2xt+1)f(t) \) is Riemann integrable on \([-x,x]\).

Now write \( g(x)=2x \int_{-x}^{x} t f(t)\, dt + \int_{-x}^{x} f(t)\, dt \); by the fundamental theorem of calculus, each of these integrals is a differentiable function of \(x\) (the integrands are continuous), and hence so is \(g\).

Hence from the above we can say that \(g\) is differentiable on \( (0, \infty) \).
Now, by the Leibniz integral rule, we have \( g'(x)=(2x^2+1)f(x)+f(-x)(1-2x^2) + \int_{-x}^{x} 2t f(t)\, dt \).
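To sanity-check this formula, here is a small sketch (Python; \(f(t)=\cos t\) is an arbitrary bounded continuous choice) comparing a central finite difference of \(g\) with the Leibniz-rule expression, with the integrals approximated by a midpoint rule:

```python
import math

f = math.cos   # an arbitrary bounded continuous function

def integral(func, a, b, steps=20_000):
    """Midpoint-rule approximation of the integral of func over [a, b]."""
    h = (b - a) / steps
    return h * sum(func(a + (i + 0.5) * h) for i in range(steps))

def g(x):
    return integral(lambda t: (2 * x * t + 1) * f(t), -x, x)

def g_prime(x):
    # Leibniz rule: (2x^2+1) f(x) + (1-2x^2) f(-x) + int_{-x}^{x} 2t f(t) dt
    return ((2 * x * x + 1) * f(x) + (1 - 2 * x * x) * f(-x)
            + integral(lambda t: 2 * t * f(t), -x, x))

x, h = 1.3, 1e-4
finite_diff = (g(x + h) - g(x - h)) / (2 * h)   # central difference
print(finite_diff, g_prime(x))                   # should agree closely
```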


Food For Thought

Let \( f: \mathbb{R} \rightarrow \mathbb{R}\) be a continuous function. Now, we define \(g(x)\) such that \( g(x)=f(x) \int_{0}^{x} f(t) d t \)
Prove that if g is a non increasing function, then f is identically equal to 0.




ISI MStat PSB 2007 Problem 3 | Application of L'hospital Rule

This is a very beautiful sample problem from ISI MStat PSB 2007 Problem 3 based on the use of L'Hôpital's rule. Let's give it a try !!

Problem- ISI MStat PSB 2007 Problem 3


Let \(f\) be a function such that \(f(0)=0\) and \(f\) has derivatives of all orders. Show that \( \lim _{h \to 0} \frac{f(h)+f(-h)}{h^{2}}=f''(0) \)
where \( f''(0)\) is the second derivative of f at 0.

Prerequisites


Differentiability

Continuity

L'hospital rule

Solution :

Let \( L= \lim _{h \to 0} \frac{f(h)+f(-h)}{h^{2}} \); it is a \( \frac{0}{0} \) form, as \(f(0)=0\) and \(f\) is continuous.

So here we can use L'Hôpital's rule, as \(f\) is differentiable.

We get L= \( \lim _{h \to 0} \frac{f'(h)-f'(-h)}{2h} = \lim _{h \to 0} \frac{(f'(h)-f'(0)) -(f'(-h)-f'(0))}{2h} \)

= \( \lim _{h \to 0} \frac{f'(h)-f'(0)}{2h} + \lim _{k \to 0} \frac{f'(k)-f'(0)}{2k} \) , taking -h=k .

= \( \frac{f''(0)}{2} + \frac{f''(0)}{2} = f''(0) \). Hence done!
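A numeric illustration (a sketch; \(f(x)=e^x-1\) is an arbitrary smooth choice with \(f(0)=0\), for which \(f''(0)=1\)):

```python
import math

def f(x: float) -> float:
    return math.exp(x) - 1   # f(0) = 0, f''(0) = 1

for h in (1e-1, 1e-2, 1e-3):
    print(h, (f(h) + f(-h)) / h ** 2)   # tends to f''(0) = 1
```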


Food For Thought

Let \( f:[0,1] \rightarrow[0,1] \) be a continuous function, let \( f^{(n)} := \underbrace{f \circ f \circ \cdots \circ f}_{n \text{ times}} \), and assume that there exists a positive integer \(m\) such that \( f^{(m)}(x)=x\) for all \( x \in[0,1] \). Prove that \( f(x)=x \) for all \( x \in[0,1] \).




ISI MStat PSB 2009 Problem 2 | Linear Difference Equation

This is a very beautiful sample problem from ISI MStat PSB 2009 Problem 2 based on the convergence of a sequence. Let's give it a try !!

Problem- ISI MStat PSB 2009 Problem 2


Let \( \{x_{n}: n \geq 0\} \) be a sequence of real numbers such that
\( x_{n+1}=\lambda x_{n}+(1-\lambda) x_{n-1}, n \geq 1,\) for some \( 0<\lambda<1\)
(a) Show that \( x_{n}=x_{0}+(x_{1}-x_{0}) \sum_{k=0}^{n-1}(\lambda-1)^{k} \)
(b) Hence, or otherwise, show that \( x_{n}\) converges and find the limit.

Prerequisites


Limit

Sequence

Linear Difference Equation

Solution :

(a) We are given that \( x_{n+1}=\lambda x_{n}+(1-\lambda) x_{n-1}, n \geq 1,\) for some \( 0<\lambda<1\)

So, \( x_{n+1} - x_{n} = -(1- \lambda)( x_n-x_{n-1}) \) ---- (1)

Again using (1) we have \( ( x_n-x_{n-1})= -(1- \lambda)( x_{n-1}-x_{n-2}) \) .

Now putting this in (1) we have , \( x_{n+1} - x_{n} = {(-(1- \lambda))}^2 ( x_{n-1}-x_{n-2}) \) .

So, proceeding like this we have \( x_{n+1} - x_{n} = {(-(1- \lambda))}^n ( x_{1}-x_{0}) \) for all \( n \geq 1\) and for some \( 0<\lambda<1\)---- (2)

So, from (2) we have \( x_{n} - x_{n-1} = {(-(1- \lambda))}^{n-1} ( x_{1}-x_{0}) \), \( \cdots \), \( (x_2-x_1)=(\lambda-1)(x_1-x_{0}) \), and \( x_1-x_{0}=x_{1}-x_{0} \).

Adding all of the above \(n\) equations (the left-hand sides telescope), we have \( x_{n}-x_{0}=(x_{1}-x_{0}) \sum_{k=0}^{n-1} {(\lambda-1)}^{k} \)

Hence , \( x_{n}=x_{0}+(x_{1}-x_{0}) \sum_{k=0}^{n-1}(\lambda-1)^{k} \) (proved ) .

(b) Summing the geometric series, we now have the explicit form \( x_{n}=x_{0}+(x_{1}-x_{0}) \times \frac{1-{( \lambda -1)}^n}{1-(\lambda -1)} \) ----(3)

Hence from (3), since \( -1 < \lambda - 1 < 0 \), we have \( \lim_{n\to\infty} {( \lambda - 1)}^{n} = 0 \), so \( x_{n} \) converges.

Now taking \( \lim_{n\to\infty} \) on both sides of (3), we get \( \lim_{n\to\infty} x_{n} = x_{0}+(x_{1}-x_{0}) \times \frac{1}{2 - \lambda} \).
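A short sketch confirms the limit (Python; \(\lambda\), \(x_0\), and \(x_1\) are arbitrary choices):

```python
lam, x0, x1 = 0.4, 1.0, 3.0      # lambda in (0,1), x_0, x_1 (arbitrary)

x_prev, x_curr = x0, x1
for _ in range(100):             # iterate x_{n+1} = lam*x_n + (1-lam)*x_{n-1}
    x_prev, x_curr = x_curr, lam * x_curr + (1 - lam) * x_prev

limit = x0 + (x1 - x0) / (2 - lam)
print(x_curr, limit)             # both ~ 2.25
```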

Food For Thought

\(m\) and \(k\) are two natural numbers, and \( a_{1}, a_{2}, \ldots, a_{m}\) and \( b_{1}, b_{2}, \ldots, b_{k}\) are two sets of positive real numbers such that \( a_{1}^{\frac{1}{n}}+a_{2}^{\frac{1}{n}}+\cdots+a_{m}^{\frac{1}{n}} = b_{1}^{\frac{1}{n}}+\cdots+b_{k}^{\frac{1}{n}} \)

for all natural numbers \( n\). Then prove that \( m=k\) and \( a_{1} a_{2} \ldots a_{m}=b_{1} b_{2} \ldots b_{k} \).




Sequence and permutations | AIME II, 2015 | Question 10

Try this beautiful problem from the American Invitational Mathematics Examination, AIME II, 2015, based on sequences and permutations.

Sequence and permutations - AIME II, 2015


Call a permutation \(a_1,a_2,....,a_n\) of the integers \(1,2,...,n\) quasi-increasing if \(a_k \leq a_{k+1} +2\) for each \(1 \leq k \leq n-1\). Find the number of quasi-increasing permutations of the integers \(1,2,....,7\).

  • is 107
  • is 486
  • is 840
  • cannot be determined from the given information

Key Concepts


Sequence

Permutations

Integers

Check the Answer


Answer: is 486.

AIME II, 2015, Question 10

Elementary Number Theory by David Burton

Try with Hints


While inserting \(n\) into a quasi-increasing permutation of the \(n-1\) integers \(1,2,\ldots,n-1\), the integer \(n\) has 3 spots where it can be placed: immediately before \(n-1\), immediately before \(n-2\), or at the end

Number of permutations with n elements is three times the number of permutations with n-1 elements

or, number of permutations for n elements=3 \(\times\) number of permutations of (n-1) elements

or, number of permutations for n elements=\(3^{2}\) number of permutations of (n-2) elements

......

or, number of permutations for n elements=\(3^{n-2}\) \(\times\) number of permutations of \(n-(n-2)=2\) elements

or, number of permutations for n elements=2 \(\times\) \(3^{n-2}\)

so the recurrence yields the closed form: the number of quasi-increasing permutations of \(n\) elements is \(2 \times 3^{n-2}\)

for \(n=3\), all six permutations are quasi-increasing, and the counts go up as 18, 54, 162, 486

for \(n=7\), this gives \(2 \times 3^{5} =486.\)
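The count can also be verified by brute force; a short sketch (Python, itertools):

```python
from itertools import permutations

def quasi_increasing(p) -> bool:
    # a_k <= a_{k+1} + 2 for every adjacent pair
    return all(p[k] <= p[k + 1] + 2 for k in range(len(p) - 1))

for n in range(3, 8):
    count = sum(quasi_increasing(p) for p in permutations(range(1, n + 1)))
    print(n, count)   # 6, 18, 54, 162, 486
```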
