Final Prep. Mathematical Statistics
Final Exam was Tuesday March 15th 2022 from 10:15am-12:05pm.
💖 Method of Distribution Function (CDF)
Given the density function \(f(y)\), the distribution function is \[F_Y(y)=\int_{\text{lower}}^{y}f(t)\,dt=F_Y(t)\Big|_{t=\text{lower}}^{t=y}\]
For \(U=\) ❤️, when y = lower, u = 💓 and when y = upper, u = 💗.
By definition of the CDF, the CDF of U is,
\(F_U(u)=P(U\leq u)=P(\)❤️\(\leq u)=P(Y\leq\)💙\()=F_Y(\)💙\()=\)💜
\[F_U(u)=\begin{cases}0 &\quad\quad\quad\quad u<\text{💓}\\ \text{💜} & \text{💓}\leq u\leq \text{💗}\\ 1 & \text{💗}\leq u\end{cases}\]
- Since \(f_U(u)=F'_U(u)\),
\(f_U(u)=\frac{d}{du}(F_U(u))=\)💚
The complete pdf of U is,
\[f_U(u)=\begin{cases}\text{💚} & \text{💓}<u<\text{💗}\\0 & \text{otherwise}\end{cases}\]
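The CDF-method template above can be sanity-checked numerically. The example below is hypothetical (the density \(f(y)=2y\), the transformation \(U=Y^2\), and all numbers are my own choices, not from the notes): it derives \(F_U\) by hand exactly as in the template, then confirms it against a Monte Carlo sample.

```python
import math
import random

# Hypothetical example (not from the notes): suppose Y has density
# f(y) = 2y on 0 < y < 1, so F_Y(y) = y^2.  Let U = Y^2.  By the CDF method,
#   F_U(u) = P(U <= u) = P(Y^2 <= u) = P(Y <= sqrt(u)) = F_Y(sqrt(u)) = u
# for 0 <= u <= 1, so f_U(u) = F_U'(u) = 1, i.e. U ~ Uniform(0, 1).

def F_U(u):
    """Derived CDF of U = Y^2 (complete piecewise form, as in the notes)."""
    if u < 0:
        return 0.0
    if u > 1:
        return 1.0
    return u

# Monte Carlo check: sample Y by the inverse-CDF method (Y = sqrt(V) with
# V ~ Uniform(0,1), since F_Y^{-1}(v) = sqrt(v)), then compare the empirical
# CDF of U = Y^2 against the derived F_U at a few points.
random.seed(0)
n = 100_000
us = [math.sqrt(random.random()) ** 2 for _ in range(n)]
for u0 in (0.2, 0.5, 0.8):
    empirical = sum(u <= u0 for u in us) / n
    assert abs(empirical - F_U(u0)) < 0.01
```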
🐥 Jacobian (Univariate)
Graph \(u=g(y)\) to verify that \(g\) is monotone increasing.
Let \(U=g(Y)\). Since \(g\) is a monotonic increasing function, we can apply the Transformation Method.
The inverse of \(g\) is \(h(u)=g^{-1}(u)=\)🥚 for 🐣\(\leq u\leq\)🐔, and \(h'(u)=\) 🐤.
The pdf of U is then, \[f_U(u)=f_Y(h(u))\cdot |h'(u)|=f_Y(🥚)\cdot|\text{🐤}|=\text{🐥}\]
And the complete pdf is,
\[f_U(u)=\begin{cases}\text{🐥} & \text{🐣}<u<\text{🐔}\\0 & \text{otherwise}\end{cases}\]
Note:
- \(y^2\) is monotonic on \(y\geq 0\)
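The univariate Jacobian (transformation) template can also be verified numerically. The example below is hypothetical (Y exponential and \(g(y)=2y+1\) are my own choices): it applies \(f_U(u)=f_Y(h(u))\cdot|h'(u)|\) and checks that the resulting density integrates to 1.

```python
import math

# Hypothetical example (not from the notes): Y ~ Exponential(1),
# f_Y(y) = e^{-y} for y > 0, and U = g(Y) = 2Y + 1, which is monotone
# increasing.  Inverse: h(u) = (u - 1)/2 for u > 1, with h'(u) = 1/2.

def f_Y(y):
    return math.exp(-y) if y > 0 else 0.0

def h(u):
    return (u - 1) / 2

h_prime = 0.5

def f_U(u):
    # Transformation method: f_U(u) = f_Y(h(u)) * |h'(u)|
    return f_Y(h(u)) * abs(h_prime)

# Sanity check: f_U should integrate to 1 over (1, infinity).
# Midpoint Riemann sum on a truncated range (tail beyond u = 41 is ~e^{-20}).
du = 0.001
total = sum(f_U(1 + (k + 0.5) * du) * du for k in range(40_000))
assert abs(total - 1.0) < 1e-3
```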
🤘 MGF

The mgf of Y is \(M_Y(t)=E(e^{tY})=\)👊.
By definition of the mgf, the mgf of U is, \[M_U(t)=E(e^{tU})=\text{something with the form of }E(e^{ty})=\text{✋ (similar to known distribution)}\]
Therefore, mgf U has the form of \(\underline{\text{a known distribution}}\) with parameters 👇 and 👆.
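A quick numeric illustration of the mgf method, with hypothetical distributions of my own choosing (independent normals, not from the notes): the product of the individual mgfs matches the mgf of a known distribution, identifying \(U=Y_1+Y_2\) as normal with summed parameters, and a Monte Carlo estimate of \(E(e^{tU})\) agrees.

```python
import math
import random

# Hypothetical example (not from the notes): Y1 ~ N(1, 4), Y2 ~ N(2, 9),
# independent.  The normal mgf is M(t) = exp(mu*t + var*t^2/2), so
#   M_U(t) = M_{Y1}(t) * M_{Y2}(t) = exp((mu1+mu2)*t + (var1+var2)*t^2/2),
# which has the form of a known distribution: Normal(mu=3, sigma^2=13).

def mgf_normal(t, mu, var):
    return math.exp(mu * t + var * t * t / 2)

mu1, var1 = 1.0, 4.0
mu2, var2 = 2.0, 9.0

t = 0.1
closed_form = mgf_normal(t, mu1 + mu2, var1 + var2)

# Monte Carlo estimate of E(e^{tU}) with U = Y1 + Y2.
random.seed(1)
n = 200_000
est = sum(math.exp(t * (random.gauss(mu1, math.sqrt(var1)) +
                        random.gauss(mu2, math.sqrt(var2))))
          for _ in range(n)) / n

assert abs(est / closed_form - 1) < 0.02  # agreement within 2%
```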
Note: (Bivariate)
- Let \(Y_i\) be the ith \(\underline{\quad\quad\quad}\) for \(i=1,2\).
🌙 Jacobian (bivariate)
The pdf of \(Y_1\) is \(f(y_1)=\)🌗 and the pdf of \(Y_2\) is \(f(y_2)=\)🌓.
Since \(Y_1\) and \(Y_2\) are independent, their joint pdf is \(f_{Y_1,Y_2}(y_1,y_2)=f(y_1)f(y_2)=\text{🌗🌓}=\text{🌝}\)
Let \(U_1=\)🏃\(=h_1(y_1,y_2)\) and \(U_2=\)🌎\(=h_2(y_1,y_2)\). Then, \(y_1=...=\text{⛵}=h_1^{-1}(u_1,u_2)\) and \(y_2=...=\text{🌊}=h_2^{-1}(u_1,u_2)\).
Then the Jacobian is, \[J=\text{det}\begin{bmatrix}\frac{\partial y_1}{\partial u_1} & \frac{\partial y_1}{\partial u_2}\\ \frac{\partial y_2}{\partial u_1} & \frac{\partial y_2}{\partial u_2}\end{bmatrix}=\text{⚓}\]
Recall: \(\det\bigl(\begin{smallmatrix}a & b \\ c & d\end{smallmatrix}\bigr)=(ad)-(bc)\)
Therefore the joint pdf of \(U_1\) and \(U_2\) is \[f_{U_1,U_2}(u_1,u_2)=f_{Y_1,Y_2}(h_1^{-1}(u_1,u_2),h_2^{-1}(u_1,u_2))\times|J|=\text{🌜⛵🌊🌛}\times|\text{⚓}|\]
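The bivariate Jacobian template can be checked on a concrete case. Everything in this example is hypothetical (iid exponentials and the sum/proportion transformation are my own choices): a finite-difference Jacobian confirms the hand-computed determinant, and the joint pdf of \((U_1,U_2)\) follows by the formula above.

```python
import math

# Hypothetical example (not from the notes): Y1, Y2 iid Exponential(1),
# joint pdf e^{-(y1+y2)} for y1, y2 > 0.  Let U1 = Y1 + Y2 and
# U2 = Y1 / (Y1 + Y2).  Inverting:
#   y1 = u1 * u2,   y2 = u1 * (1 - u2)
# J = det [[u2, u1], [1-u2, -u1]] = -u1*u2 - u1*(1-u2) = -u1, so |J| = u1.

def inverse(u1, u2):
    """(y1, y2) = (h1^{-1}(u1,u2), h2^{-1}(u1,u2))."""
    return u1 * u2, u1 * (1 - u2)

def jacobian(u1, u2, eps=1e-6):
    # Numerical Jacobian determinant via central differences.
    dy1_du1 = (inverse(u1 + eps, u2)[0] - inverse(u1 - eps, u2)[0]) / (2 * eps)
    dy1_du2 = (inverse(u1, u2 + eps)[0] - inverse(u1, u2 - eps)[0]) / (2 * eps)
    dy2_du1 = (inverse(u1 + eps, u2)[1] - inverse(u1 - eps, u2)[1]) / (2 * eps)
    dy2_du2 = (inverse(u1, u2 + eps)[1] - inverse(u1, u2 - eps)[1]) / (2 * eps)
    return dy1_du1 * dy2_du2 - dy1_du2 * dy2_du1

def joint_pdf_U(u1, u2):
    # f_{U1,U2}(u1,u2) = f_{Y1,Y2}(y1,y2) * |J| = e^{-u1} * u1
    # for u1 > 0 and 0 < u2 < 1.
    y1, y2 = inverse(u1, u2)
    return math.exp(-(y1 + y2)) * abs(jacobian(u1, u2))

# Numerical determinant matches the hand computation |J| = u1.
assert abs(abs(jacobian(2.5, 0.3)) - 2.5) < 1e-6
```

Here \(U_1\) and \(U_2\) factor into a Gamma(2, 1) and a Uniform(0, 1) density, so they are independent, which is a common way these problems resolve.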
🌺 Normal Sample
Find the probability that the mean of a sample of \(n=\)🐝 will be within X=🍯 of the population mean, given \(\mu=\)🌻 and \(\sigma^2=\)🌿.
Let \(\overline{Y}\) be the mean of \(\underline{\quad\text{(words)}\quad}\) of \(\underline{\quad\text{🐝 }\quad}\) \(\underline{\quad\text{(words)}\quad}\).
We want to find \(P(|\overline{Y}-\mu|\leq X)=P(|\overline{Y}-\text{🌻}|\leq \text{🍯})=P(-\text{🍯}\leq \overline{Y}-\text{🌻}\leq \text{🍯})\).
Since the population is normally distributed, \(\overline{Y}\) is normally distributed with mean \(\mu\) and standard deviation \(\frac{\sigma}{\sqrt{n}}=\frac{\sqrt{\text{🌿}}}{\sqrt{\text{🐝}}}=\)🌾. Then, \[P(-\text{🍯}\leq\overline{Y}-\mu\leq \text{🍯})=P\left(\frac{-\text{🍯}}{\text{🌾}}\leq \frac{\overline{Y}-\text{🌻}}{\text{🌾}}\leq \frac{\text{🍯}}{\text{🌾}}\right)=P(\text{🍄}\leq Z\leq \text{🌸})\]
On Ti Calculator normalcdf(🍄,🌸,0,1)
Note:
Variance = 4 \(\rightarrow\sigma^2=4\), so \(\sigma=\sqrt{4}=2\)
Standard Deviation = 4 \(\rightarrow\sigma=4\), so \(\sigma^2=16\)
\(Z=\frac{\overline{Y}-\mu}{\sigma/\sqrt{n}}=\frac{\sqrt{n}(\overline{Y}-\mu)}{\sigma}\)
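The normal-sample calculation can be done in Python instead of on the TI calculator, using the standard library's `statistics.NormalDist`. All numbers below are hypothetical stand-ins for the placeholders (my own choices, not from the notes).

```python
from statistics import NormalDist

# Hypothetical numbers standing in for the placeholders: population mean
# mu = 10, population variance sigma^2 = 25, sample size n = 16, and we want
# the probability the sample mean falls within x = 2 of mu.
# Ybar ~ Normal(mu, sigma/sqrt(n)), so
#   P(|Ybar - mu| <= x) = P(-x/(sigma/sqrt(n)) <= Z <= x/(sigma/sqrt(n))).

mu, var, n, x = 10.0, 25.0, 16, 2.0
se = (var ** 0.5) / (n ** 0.5)   # sigma / sqrt(n) = 5/4 = 1.25
z = x / se                       # 1.6

Z = NormalDist(0, 1)
prob = Z.cdf(z) - Z.cdf(-z)      # same as normalcdf(-1.6, 1.6, 0, 1)
assert abs(prob - 0.8904) < 0.001
```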