The continuous random variable
x has log normal distribution if
y has a normal distribution and
$x={e}^{y}.$ Thus, if
$$y\sim N\left(\mu ,{\sigma}^{2}\right),$$ then the pdf of a log normal distribution is
$$f\left(x\right)=\begin{cases}\frac{1}{x\sigma \sqrt{2\pi}}{e}^{-\frac{{\left(\mathrm{ln}\left(x\right)-\mu \right)}^{2}}{2{\sigma}^{2}}}, & \text{for } x>0,\\ 0, & \text{otherwise.}\end{cases}$$ The mean and variance of
x are
${\mu}_{x}={e}^{\mu +{\scriptscriptstyle \frac{{\sigma}^{2}}{2}}}$ and
${\sigma}_{x}^{2}=\left({e}^{{\sigma}^{2}}-1\right){e}^{2\mu +{\sigma}^{2}}.$ Because the distribution is skewed to the right, the log normal distribution is sometimes used to describe income distributions (where there are relatively few very wealthy people and incomes generally are positive). Figure 4 shows the graphs of the pdf and cumulative functions for the log normal distributions for two values of
σ .
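As a quick check of these moment formulas, the sketch below (Python standard library; the values μ = 0 and σ = 0.5 are illustrative choices, not from the text) compares the theoretical mean of x = e^y with a Monte Carlo average:

```python
import math
import random

# Illustrative parameters of the underlying normal distribution
mu, sigma = 0.0, 0.5

# Theoretical mean and variance of x = e^y from the formulas above
mean_x = math.exp(mu + sigma**2 / 2)
var_x = (math.exp(sigma**2) - 1) * math.exp(2 * mu + sigma**2)

# Monte Carlo check: draw y ~ N(mu, sigma^2) and exponentiate
random.seed(42)
draws = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]
sample_mean = sum(draws) / len(draws)

print(round(mean_x, 4), round(sample_mean, 4))  # the two should be close
```

The simulated average tracks e^{μ+σ²/2} closely, which is a useful sanity check because the mean of the log normal is larger than e^μ, a point that is easy to get wrong.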

Gamma distribution.

A positive random variable
x has a gamma distribution if its pdf is
$$f\left(x\right)=\frac{1}{\Gamma \left(\alpha \right){\beta}^{\alpha}}{x}^{\alpha -1}{e}^{-{\scriptscriptstyle \frac{x}{\beta}}}$$ for
$x>0$ and 0 elsewhere.
$\Gamma \left(\alpha \right)$ is known as the gamma function and is defined to be
$$\Gamma \left(\alpha \right)={\displaystyle {\int}_{0}^{\infty}{y}^{\alpha -1}{e}^{-y}dy},$$ which equals $\left(\alpha -1\right)!$ when $\alpha $ is a positive integer. The gamma distribution is often used to model waiting times, such as the time until death. Its mean and variance are given by
$\mu =\alpha \beta $ and
${\sigma}^{2}=\alpha {\beta}^{2}.$
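As a numerical illustration (Python standard library; α = 2 and β = 3 are illustrative values), `math.gamma` evaluates the gamma function, and a crude Riemann sum recovers the mean αβ:

```python
import math

def gamma_pdf(x, alpha, beta):
    """Gamma pdf: f(x) = x^(alpha-1) e^(-x/beta) / (Gamma(alpha) beta^alpha)."""
    return x**(alpha - 1) * math.exp(-x / beta) / (math.gamma(alpha) * beta**alpha)

# Gamma(alpha) = (alpha - 1)! for positive integer alpha
assert math.gamma(5) == math.factorial(4)  # both equal 24

# Numerical check that the mean is alpha * beta (illustrative alpha=2, beta=3)
alpha, beta = 2.0, 3.0
dx = 0.001
mean = sum(x * gamma_pdf(x, alpha, beta) * dx
           for x in (i * dx for i in range(1, 100_000)))
print(round(mean, 2))  # close to alpha * beta = 6
```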

Chi-square distribution.

A chi-square distribution (
${\chi}^{2}\left(k\right)$ ) is the sum of the squares of
k independent standard normal random variables and is a special case of the gamma distribution (with
$\alpha =\frac{k}{2}$ and
$\beta =2$ ). The pdf of a chi-square distribution with
k degrees of freedom is
$$f\left(x\right)=\frac{1}{{2}^{{\scriptscriptstyle \raisebox{1ex}{$k$}\!\left/ \!\raisebox{-1ex}{$2$}\right.}}\Gamma \left({\scriptscriptstyle \frac{k}{2}}\right)}{x}^{{\scriptscriptstyle \frac{k}{2}}-1}{e}^{-{\scriptscriptstyle \frac{x}{2}}}$$ where
$x>0.$ Its mean and variance are
$\mu =k$ and
${\sigma}^{2}=2k.$ If
$y={\displaystyle \sum _{i=1}^{k}{x}_{i}^{2}}$ where the
x
_{i} 's are independently drawn from the standard normal distribution (N(0, 1)), then
$y\sim {\chi}^{2}\left(k\right).$
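This construction is easy to simulate. The sketch below (illustrative k = 4 and sample count) builds chi-square draws as sums of squared standard normal draws and checks the mean k and variance 2k:

```python
import random

k = 4  # illustrative degrees of freedom
random.seed(0)

# y = sum of squares of k independent N(0, 1) draws ~ chi-square(k)
samples = [sum(random.gauss(0, 1)**2 for _ in range(k))
           for _ in range(100_000)]

sample_mean = sum(samples) / len(samples)
sample_var = sum((s - sample_mean)**2 for s in samples) / len(samples)

print(round(sample_mean, 2), round(sample_var, 2))  # near k = 4 and 2k = 8
```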

Student's t-distribution.

Consider two random variables,
x and
v . Assume that
$x\sim N\left(0,1\right)$ and
$v\sim {\chi}^{2}\left(r\right)$ and are stochastically independent. Then the random variable
$t=\frac{x}{\sqrt{\frac{v}{r}}}$ has the t-distribution with
r degrees of freedom. The pdf and cumulative function of
t are
$$f\left(t\right)=\frac{\Gamma \left(\frac{r+1}{2}\right)}{\sqrt{r\pi}\Gamma \left(\frac{r}{2}\right)}{\left(1+\frac{{t}^{2}}{r}\right)}^{-\left(\frac{r+1}{2}\right)}$$ and
$$F\left(t\right)=\frac{1}{2}+t\,\frac{\Gamma \left(\frac{r+1}{2}\right)}{\sqrt{\pi r}\,\Gamma \left(\frac{r}{2}\right)}\,{}_{2}F_{1}\left(\frac{1}{2},\frac{r+1}{2};\frac{3}{2};-\frac{{t}^{2}}{r}\right),$$ where ${}_{2}F_{1}$ is the hypergeometric function. The mean and variance of the distribution are 0 for
$r>1$ and
$\frac{r}{r-2}$ for
$r>2,$ respectively.
The mean of the t-distribution is undefined for
$r\le 1.$ The variance of the distribution is
$\infty $ for
$1<r\le 2$ and undefined for
$r\le 1.$ The t-distribution plays a prominent role in hypothesis testing that is well-known to all undergraduate economics majors.
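The pdf above can be checked numerically. The sketch below (illustrative r = 5, crude Riemann sum over a wide grid) verifies that the density integrates to about 1 and that its variance is near r/(r − 2):

```python
import math

def t_pdf(t, r):
    """Student's t pdf with r degrees of freedom, from the formula above."""
    c = math.gamma((r + 1) / 2) / (math.sqrt(r * math.pi) * math.gamma(r / 2))
    return c * (1 + t * t / r) ** (-(r + 1) / 2)

r = 5              # illustrative degrees of freedom
dt = 0.001
grid = [i * dt for i in range(-50_000, 50_001)]  # integrate over [-50, 50]

total = sum(t_pdf(t, r) * dt for t in grid)
variance = sum(t * t * t_pdf(t, r) * dt for t in grid)

print(round(total, 3), round(variance, 3))  # ~1 and ~r/(r-2) = 5/3
```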

F distribution.

Consider two stochastically independent chi-square random variables such that
$u\sim {\text{\chi}}^{2}\left({r}_{1}\right)$ and
$v\sim {\text{\chi}}^{2}\left({r}_{2}\right)$ and
$u,v>0.$ The new random variable
$$f=\frac{u/{r}_{1}}{v/{r}_{2}}$$ has an F-distribution with
${r}_{1}$ and
${r}_{2}$ degrees of freedom. The pdf for the F-distribution is
$$g\left(f\right)=\frac{\Gamma \left(\frac{{r}_{1}+{r}_{2}}{2}\right){\left(\frac{{r}_{1}}{{r}_{2}}\right)}^{{r}_{1}/2}}{\Gamma \left(\frac{{r}_{1}}{2}\right)\Gamma \left(\frac{{r}_{2}}{2}\right)}\frac{{f}^{\frac{{r}_{1}}{2}-1}}{{\left(1+\frac{{r}_{1}f}{{r}_{2}}\right)}^{\frac{{r}_{1}+{r}_{2}}{2}}}$$ for $f>0.$ The F-distribution is used in testing whether population variances are equal and in performing likelihood ratio tests.
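A crude numerical integration (illustrative r₁ = 5 and r₂ = 10; note the normalizing constant carries the factor (r₁/r₂)^{r₁/2}) confirms that the F pdf integrates to about 1:

```python
import math

def f_pdf(f, r1, r2):
    """F pdf with r1 and r2 degrees of freedom."""
    c = (math.gamma((r1 + r2) / 2) * (r1 / r2) ** (r1 / 2)
         / (math.gamma(r1 / 2) * math.gamma(r2 / 2)))
    return c * f ** (r1 / 2 - 1) / (1 + r1 * f / r2) ** ((r1 + r2) / 2)

r1, r2 = 5, 10     # illustrative degrees of freedom
df = 0.001
# Riemann sum over (0, 200]; the tail beyond 200 is negligible here
total = sum(f_pdf(i * df, r1, r2) * df for i in range(1, 200_000))
print(round(total, 3))  # ~1
```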

Multivariate normal distribution.

Consider the
n random variables
${x}_{1},{x}_{2},\cdots ,{x}_{n}$ where each variable has a normal distribution—that is,
${x}_{i}\sim N\left({\mu}_{i},{\sigma}_{i}^{2}\right)$ and the covariance between pairs of the variables is
${\sigma}_{ij}=E\left[\left({x}_{i}-{\mu}_{i}\right)\left({x}_{j}-{\mu}_{j}\right)\right].$ We can arrange the variances and covariances into an
n -by-
n matrix where
$$\Sigma =\left[\begin{array}{cccc}{\sigma}_{1}^{2}& {\sigma}_{12}& \cdots & {\sigma}_{1n}\\ {\sigma}_{21}& {\sigma}_{2}^{2}& \cdots & {\sigma}_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ {\sigma}_{n1}& {\sigma}_{n2}& \cdots & {\sigma}_{n}^{2}\end{array}\right]$$ that is known as the variance-covariance matrix. Define the vector
$\left(x-\mu \right)=\left(\begin{array}{c}{x}_{1}-{\mu}_{1}\\ \vdots \\ {x}_{n}-{\mu}_{n}\end{array}\right)$ and
${\left(x-\mu \right)}^{\prime}$ as its transpose. Then,
$${\left(x-\mu \right)}^{\prime}{\Sigma}^{-1}\left(x-\mu \right)={\displaystyle \sum _{i=1}^{n}{\displaystyle \sum _{j=1}^{n}\left({x}_{i}-{\mu}_{i}\right)\left({x}_{j}-{\mu}_{j}\right){\sigma}^{ij}}},$$ where
${\sigma}^{ij}$ denotes the $\left(i,j\right)$ element of ${\Sigma}^{-1}.$ If
$\left|\Sigma \right|$ is the determinant of the variance-covariance matrix, then the pdf for the joint distribution of these random variables is
$$f\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)=\frac{1}{{\left(2\pi \right)}^{n/2}{\left|\Sigma \right|}^{1/2}}{e}^{-\frac{1}{2}{\left(x-\mu \right)}^{\prime}{\Sigma}^{-1}\left(x-\mu \right)}.$$ If the random variables are stochastically independent, the covariances are equal to 0 and the pdf becomes
$$f\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)=\frac{1}{{\left(2\pi \right)}^{n/2}{\left({\displaystyle \prod _{i=1}^{n}{\sigma}_{i}^{2}}\right)}^{1/2}}{e}^{-\frac{1}{2}{\displaystyle \sum _{i=1}^{n}\frac{{\left({x}_{i}-{\mu}_{i}\right)}^{2}}{{\sigma}_{i}^{2}}}}.$$ If the
n random variables are all drawn from the same normal distribution with a mean of μ and a variance of
${\sigma}^{2},$ then the pdf simplifies to
$$f\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)=\frac{1}{{\left(2\pi {\sigma}^{2}\right)}^{n/2}}{e}^{-\frac{1}{2{\sigma}^{2}}{\displaystyle \sum _{i=1}^{n}{\left({x}_{i}-\mu \right)}^{2}}}.$$
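To make the independent-case factorization concrete, the sketch below (the means, variances, and evaluation point are illustrative values) checks numerically that the joint pdf with diagonal Σ equals the product of the univariate normal pdfs:

```python
import math

def univ_normal_pdf(x, mu, sigma2):
    """Univariate normal pdf with mean mu and variance sigma2."""
    return math.exp(-(x - mu)**2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

# Illustrative means, variances, and evaluation point
mus = [0.0, 1.0, -2.0]
sig2s = [1.0, 0.5, 2.0]
xs = [0.3, 0.8, -1.5]

# Product of univariate normal pdfs
product = 1.0
for x, m, s2 in zip(xs, mus, sig2s):
    product *= univ_normal_pdf(x, m, s2)

# Joint pdf with diagonal Sigma, using the independent-case formula above
n = len(xs)
det = 1.0
for s2 in sig2s:
    det *= s2                                  # |Sigma| = product of variances
quad = sum((x - m)**2 / s2 for x, m, s2 in zip(xs, mus, sig2s))
joint = math.exp(-quad / 2) / ((2 * math.pi) ** (n / 2) * math.sqrt(det))

print(math.isclose(product, joint))  # True
```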

Characteristics of an estimator of a population parameter θ

Finite estimators

Bias.

The bias of an estimator is defined to be
$$B\left(\widehat{\theta}\right)=E\left(\widehat{\theta}\right)-\theta .$$ An estimator is unbiased if and only if
$B\left(\widehat{\theta}\right)=0.$

Mean square error.

The mean square error (MSE) of an estimator is defined to be
$$MSE\left(\widehat{\theta}\right)=E\left[{\left(\widehat{\theta}-\theta \right)}^{2}\right].$$ It is relatively easy to show that
$$MSE\left(\widehat{\theta}\right)=V\left(\widehat{\theta}\right)+{\left(B\left(\widehat{\theta}\right)\right)}^{2}.$$ Often a biased estimator with a smaller MSE may be preferred to an unbiased estimator with a relatively larger MSE.
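The decomposition can be verified by simulation. The sketch below (Python standard library; sample size, replication count, and σ² = 1 are illustrative) uses the variance estimator that divides by n, a standard example of a biased estimator, and checks that the empirical MSE equals the empirical variance plus the squared bias:

```python
import random
import statistics

random.seed(1)
n, sigma2 = 5, 1.0                      # illustrative sample size and true variance
estimates = []
for _ in range(50_000):
    xs = [random.gauss(0, 1) for _ in range(n)]
    xbar = sum(xs) / n
    estimates.append(sum((x - xbar)**2 for x in xs) / n)  # biased: divides by n

mean_est = statistics.fmean(estimates)
bias = mean_est - sigma2                               # B(th) = E[th] - th
var = statistics.pvariance(estimates)                  # V(th)
mse = statistics.fmean((e - sigma2)**2 for e in estimates)

print(round(mse, 4), round(var + bias**2, 4))  # the two agree
```

The identity holds exactly in the simulated data, not just approximately, because it is an algebraic decomposition of the sum of squared errors around θ.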

Efficiency.

An estimator
$\widehat{\theta}$ is relatively more efficient than
$\tilde{\theta}$ if and only if
$V\left(\widehat{\theta}\right)<V\left(\tilde{\theta}\right).$ Generally, we would prefer to use the most efficient estimator available (if it is unbiased).

Asymptotic estimators

Plim.

${x}_{n}$ converges in probability to a constant,
c , if
${\mathrm{lim}}_{n\to \infty}\mathrm{Pr}\left(\left|{x}_{n}-c\right|>\epsilon \right)=0$ for any positive
$\epsilon .$ We can write this relationship as
$p\mathrm{lim}{x}_{n}=c.$

Greene (Greene, William H. (1990). Econometric Analysis. New York: Macmillan Publishing Company, p. 103) offers this example of plim: Suppose
${x}_{n}$ equals 0 with probability
$1-\left(\frac{1}{n}\right)$ and
n with probability
$\left(\frac{1}{n}\right).$ As
n increases, the second point becomes more remote from the first point; at the same time, the probability of observing the second point shrinks toward zero. This effect is shown in Figure 5, where as
n increases the probability distribution concentrates more and more on 0.
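Greene's example can be simulated directly. The sketch below (the tolerance ε = 0.5 and the values of n are illustrative choices) estimates Pr(|xₙ| > ε) for increasing n and shows it falling like 1/n:

```python
import random

def draw_xn(n, rng):
    """x_n equals n with probability 1/n and 0 with probability 1 - 1/n."""
    return n if rng.random() < 1 / n else 0

rng = random.Random(7)
eps = 0.5                                    # illustrative tolerance
results = {}
for n in [10, 100, 1000]:
    hits = sum(abs(draw_xn(n, rng)) > eps for _ in range(100_000))
    results[n] = hits / 100_000
    print(n, results[n])                     # empirical Pr(|x_n| > eps) ~ 1/n
```

The empirical exceedance probability drops toward 0 as n grows, which is exactly the plim condition, even though the second mass point n is drifting off to infinity.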

Consistency.

The estimator
$\widehat{\theta}$ is a consistent estimator of
θ if and only if
$p\mathrm{lim}\widehat{\theta}=\theta .$

Asymptotically unbiased.

An estimator
$\widehat{\theta}$ is an asymptotically unbiased estimator of
θ if
${\mathrm{lim}}_{n\to \infty}E\left[\widehat{\theta}\right]=\theta .$
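A standard illustration (not from the text) is again the variance estimator that divides by n: it is biased in any finite sample, with E[θ̂] = σ²(n − 1)/n, yet asymptotically unbiased because the bias vanishes as n → ∞. The sketch below checks this by Monte Carlo with an illustrative σ² = 4:

```python
import random

random.seed(3)
sigma2 = 4.0                                  # illustrative true variance

def mean_estimate(n, reps=10_000):
    """Average of the divide-by-n variance estimator over many samples."""
    total = 0.0
    for _ in range(reps):
        xs = [random.gauss(0, sigma2 ** 0.5) for _ in range(n)]
        xbar = sum(xs) / n
        total += sum((x - xbar) ** 2 for x in xs) / n
    return total / reps

# E[th] = sigma2 * (n-1)/n: 3.2 for n=5, 3.96 for n=100
e_small, e_large = mean_estimate(5), mean_estimate(100)
print(round(e_small, 2), round(e_large, 2))  # ~3.2 and ~3.96
```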
