$$f(x_1, x_2, \ldots, x_n) = f(x_1)\,f(x_2) \cdots f(x_n). \tag{1}$$

The pdf of the joint distribution shown in (1) is known as the likelihood function. If the sample were not independently drawn, the pdf of the joint distribution could not be written in such a simple form because the covariances among the members of the sample would not be equal to zero. The logarithm of this function (or, as it is referred to, the log of the likelihood function) is given by the sum

$$L(x_1, x_2, \ldots, x_n) = \ln f(x_1) + \ln f(x_2) + \cdots + \ln f(x_n) = \sum_{i=1}^{n} \ln f(x_i).$$

The maximum likelihood method involves choosing as estimators of the unknown parameters of the distribution the values that maximize the likelihood function. Because the logarithm is a monotonically increasing function, maximizing the log of the likelihood function is equivalent to maximizing the likelihood function itself. (A function $g(y)$ is monotonically increasing in $y$ if $g'(y) > 0$; because $\frac{d}{dx}\ln x = \frac{1}{x} > 0$ for $x > 0$, the logarithm is monotonically increasing for positive values of $x$.) The following example illustrates how to derive ML estimators.
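As a quick numerical check of this equivalence (a minimal sketch assuming a normal population; the helper names `normal_logpdf` and `log_likelihood` are illustrative, not from the text), the log of the product of the pdfs equals the sum of the log pdfs:

```python
import math
import random

def normal_logpdf(x, mu, sigma):
    """Log of the normal pdf, ln f(x)."""
    return (-math.log(sigma) - 0.5 * math.log(2 * math.pi)
            - (x - mu) ** 2 / (2 * sigma ** 2))

def log_likelihood(sample, mu, sigma):
    """Sum of the log pdfs -- the log of the likelihood function."""
    return sum(normal_logpdf(x, mu, sigma) for x in sample)

random.seed(0)
sample = [random.gauss(5.0, 2.0) for _ in range(100)]

# The log of the product of pdfs equals the sum of their logs.
product = math.prod(math.exp(normal_logpdf(x, 5.0, 2.0)) for x in sample)
assert abs(math.log(product) - log_likelihood(sample, 5.0, 2.0)) < 1e-8
```

Working with the sum also avoids the numerical underflow that the raw product of many small pdf values would eventually cause.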

The ML estimators of the population mean and population variance.

Assume that $x \sim N(\mu, \sigma^2)$. Consider a sample of size n drawn independently from this distribution. The likelihood function is the product of the pdf of each observation, or:

$$f(x_i) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}} \;\Rightarrow\; L(x_1, x_2, \ldots, x_n) = \frac{1}{\sigma^n (2\pi)^{n/2}}\, e^{-\frac{\sum_{i=1}^{n}(x_i-\mu)^2}{2\sigma^2}}.$$

Thus, the log of the likelihood function of this sample is

$$L(x_1, x_2, \ldots, x_n) = -\frac{n \ln 2\pi}{2} - n \ln \sigma - \frac{\sum_{i=1}^{n}(x_i-\mu)^2}{2\sigma^2}.$$

In the ML method we want to find the estimators of the mean and variance, $\tilde{\mu}$ and $\tilde{\sigma}$, that maximize the log of the likelihood function. Substituting the parameter estimates into the log of the likelihood function gives our problem as:

$$\max_{\tilde{\mu},\tilde{\sigma}}\; L(x_1, x_2, \ldots, x_n) = \max_{\tilde{\mu},\tilde{\sigma}} \left[ -\frac{n \ln 2\pi}{2} - n \ln \tilde{\sigma} - \frac{\sum (x_i-\tilde{\mu})^2}{2\tilde{\sigma}^2} \right].$$

Setting the derivatives of the log of the likelihood function with respect to $\tilde{\mu}$ and $\tilde{\sigma}$ equal to 0 gives:

$$\frac{\partial L(x_1, x_2, \ldots, x_n)}{\partial \tilde{\mu}} = \frac{\sum (x_i-\tilde{\mu})}{\tilde{\sigma}^2} = 0 \quad \text{and}$$
$$\frac{\partial L(x_1, x_2, \ldots, x_n)}{\partial \tilde{\sigma}} = -\frac{n}{\tilde{\sigma}} + \frac{\sum (x_i-\tilde{\mu})^2}{\tilde{\sigma}^3} = 0.$$

Solving these two equations simultaneously gives:

$$\tilde{\mu} = \frac{\sum_{i=1}^{n} x_i}{n} = \bar{x} \quad \text{and} \quad \tilde{\sigma}^2 = \frac{\sum (x_i-\tilde{\mu})^2}{n}.$$

Notice that the estimator of the population mean is equal to the sample mean, the same result you found in your introductory statistics course. However, the unbiased estimator of the population variance used in that course is

$$s^2 = \frac{\sum (x_i-\tilde{\mu})^2}{n-1}.$$
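These closed forms are easy to check numerically (an illustrative sketch; the population values 10.0 and 3.0 and the variable names are arbitrary assumptions). The ML variance divides by $n$, so it is always smaller than the unbiased estimator that divides by $n-1$:

```python
import random

random.seed(1)
n = 50
sample = [random.gauss(10.0, 3.0) for _ in range(n)]

# ML estimators derived in the text: the sample mean, and the variance
# with n (not n - 1) in the denominator.
mu_ml = sum(sample) / n
var_ml = sum((x - mu_ml) ** 2 for x in sample) / n

# Unbiased estimator from introductory statistics.
var_unbiased = sum((x - mu_ml) ** 2 for x in sample) / (n - 1)

# The ML variance is the unbiased variance scaled by (n - 1) / n,
# so it is the smaller of the two.
assert abs(var_ml - var_unbiased * (n - 1) / n) < 1e-12
assert var_ml < var_unbiased
```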

Thus, one of the common "problems" with ML estimators is that they are quite often biased estimators of a population parameter. On the other hand, under very general conditions ML estimators are consistent, are asymptotically efficient, and have an asymptotically normal distribution (these are desirable large-sample characteristics of potential estimators and are discussed in advanced statistics courses). Intuitively, these concepts mean that as the sample size increases the estimator becomes more precise (the variance becomes smaller and any bias disappears) and the distribution of the estimator approaches the normal distribution. The formal definitions of these terms involve advanced statistical concepts that are reported here only in the interest of completeness. An estimator $\hat{\theta}$ of the parameter $\theta$ is consistent if and only if $\operatorname{plim} \hat{\theta} = \theta$. This estimator has an asymptotically normal distribution if $\hat{\theta} \overset{a}{\sim} N\left(\theta, \{\mathbf{I}(\theta)\}^{-1}\right)$. An unbiased estimator is more efficient than another unbiased estimator if it has a smaller variance than the alternative estimator. An asymptotically efficient estimator is one whose mean square error tends to zero as the sample size increases. The mean square error (MSE) is defined to be

$$MSE(\hat{\theta}) = E\left[(\hat{\theta}-\theta)^2\right] = V(\hat{\theta}) + \left(Bias[\hat{\theta}]\right)^2.$$

An estimator is asymptotically efficient if $\lim_{n\to\infty} MSE(\hat{\theta}) = 0$. See any advanced statistics text or Statistical terminology for further information on these concepts.
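A small simulation illustrates the MSE definition (a hedged sketch; the true parameters, trial count, and function name below are arbitrary choices, not from the text). The MSE of the biased ML variance estimator shrinks as the sample size grows, consistent with asymptotic efficiency:

```python
import random

def mse_of_ml_variance(n, trials=2000, mu=0.0, sigma=2.0, seed=42):
    """Monte Carlo estimate of E[(var_ml - sigma^2)^2] at sample size n."""
    rng = random.Random(seed)
    true_var = sigma ** 2
    total = 0.0
    for _ in range(trials):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        xbar = sum(sample) / n
        var_ml = sum((x - xbar) ** 2 for x in sample) / n  # biased ML estimator
        total += (var_ml - true_var) ** 2
    return total / trials

# The MSE falls as the sample size increases.
assert mse_of_ml_variance(200) < mse_of_ml_variance(20)
```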

Application of the ML method to regressions

The discussion above illustrates the basics of the ML method—you form the log of the likelihood function and then find the values of the parameter estimates that maximize this function. In most cases the maximization will not yield answers in closed form—that is, you cannot find a neat algebraic formula as we did for the population mean. However, you can use computer programs to search for the values of the parameter estimates that maximize this function. Thus, in advanced regression models you will often treat the ML method as a “black box” and not concern yourself with the estimation details. However, I illustrate one more example of the ML technique.
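The "search" idea can be sketched in a few lines (all function names here are illustrative, and a real program would use a library optimizer rather than this crude search). Minimizing the negative log-likelihood numerically lands on the same answers as the closed forms derived above:

```python
import math
import random

def neg_log_likelihood(params, sample):
    """Negative normal log-likelihood; minimizing this maximizes L."""
    mu, sigma = params
    n = len(sample)
    return (n * math.log(sigma) + 0.5 * n * math.log(2 * math.pi)
            + sum((x - mu) ** 2 for x in sample) / (2 * sigma ** 2))

def coordinate_search(fn, start, step=1.0, tol=1e-6):
    """Crude derivative-free minimizer: try steps along each axis,
    halving the step size whenever no move improves the objective."""
    point = list(start)
    while step > tol:
        improved = False
        for i in range(len(point)):
            for delta in (step, -step):
                trial = list(point)
                trial[i] += delta
                if trial[1] > 0 and fn(trial) < fn(point):  # keep sigma > 0
                    point = trial
                    improved = True
        if not improved:
            step /= 2
    return point

random.seed(3)
sample = [random.gauss(4.0, 1.5) for _ in range(500)]
mu_hat, sigma_hat = coordinate_search(
    lambda p: neg_log_likelihood(p, sample), start=[0.0, 1.0])

# The numerical search recovers the closed-form ML answers.
xbar = sum(sample) / len(sample)
sd_ml = math.sqrt(sum((x - xbar) ** 2 for x in sample) / len(sample))
assert abs(mu_hat - xbar) < 1e-3
assert abs(sigma_hat - sd_ml) < 1e-3
```

Statistical packages do exactly this, only with far more sophisticated optimizers.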

The ML estimators for a simple regression.

Assume that we want to estimate the population parameters for the regression model $y_i = \beta x_i + \varepsilon_i$, where we assume that

  1. $\varepsilon_i \sim N(0, \sigma^2)$,
  2. $E(\varepsilon_i \varepsilon_j) = 0$ for $i \neq j$,
  3. $y_i = Y_i - \bar{Y}$ and $x_i = X_i - \bar{X}$ (this assumption allows us to ignore the estimation of the intercept term), and
  4. $x_i$ is a non-stochastic variable.

The assumption of a normally distributed error term implies that $\varepsilon_i = y_i - \beta x_i \sim N(0, \sigma^2)$. Thus, the pdf of the error term is

$$f(\varepsilon_i) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(y_i-\beta x_i)^2}{2\sigma^2}}$$

and, thus, the likelihood function (the symbol $\prod_{i=1}^{n} x_i$ is equivalent to the product $x_1 x_2 \cdots x_n$) is:

$$\prod_{i=1}^{n} f(\varepsilon_i) = \prod_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(y_i-\beta x_i)^2}{2\sigma^2}} = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^n \prod_{i=1}^{n} e^{-\frac{(y_i-\beta x_i)^2}{2\sigma^2}}$$

and the log of the likelihood function is

$$L(\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n) = -n \ln \sqrt{2\pi} - n \ln \tilde{\sigma} - \frac{\sum_{i=1}^{n}(y_i-\tilde{\beta} x_i)^2}{2\tilde{\sigma}^2}.$$

We find the estimators $\tilde{\beta}$ and $\tilde{\sigma}$ in the same manner as we did for the sample mean and variance. Differentiating the log of the likelihood function and setting these first derivatives equal to 0 gives the following two first-order conditions:

$$\frac{\partial L(\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n)}{\partial \tilde{\beta}} = \frac{2\sum_{i=1}^{n}(y_i-\tilde{\beta} x_i)x_i}{2\tilde{\sigma}^2} = 0$$

and

$$\frac{\partial L(\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n)}{\partial \tilde{\sigma}} = -\frac{n}{\tilde{\sigma}} + \frac{\sum_{i=1}^{n}(y_i-\tilde{\beta} x_i)^2}{\tilde{\sigma}^3} = 0.$$

Thus, the ML estimators are:

$$\tilde{\beta} = \frac{\sum_{i=1}^{n} y_i x_i}{\sum_{i=1}^{n} x_i^2} \quad \text{and} \quad \tilde{\sigma}^2 = \frac{\sum_{i=1}^{n}(y_i-\tilde{\beta} x_i)^2}{n}.$$

Notice that in this simple case the ML estimator of $\beta$ is the same as the OLS estimator of $\beta$. Also, notice that the ML estimator of $\sigma^2$ is biased; the (unbiased) OLS estimator of $\sigma^2$ is

$$s^2 = \frac{\sum_{i=1}^{n}(y_i-\tilde{\beta} x_i)^2}{n-2}.$$
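The regression closed forms can be verified the same way (an illustrative sketch with simulated data; `beta_true`, the seed, and the sample design are assumptions, not from the text):

```python
import random

random.seed(7)
n = 100
beta_true, sigma_true = 2.5, 1.0

# Simulate the no-intercept model y_i = beta * x_i + eps_i.
x = [random.uniform(-3, 3) for _ in range(n)]
y = [beta_true * xi + random.gauss(0.0, sigma_true) for xi in x]

# ML estimator of beta from the first-order conditions.
beta_ml = sum(yi * xi for yi, xi in zip(y, x)) / sum(xi ** 2 for xi in x)
resid_ss = sum((yi - beta_ml * xi) ** 2 for yi, xi in zip(y, x))
var_ml = resid_ss / n          # biased ML estimator of sigma^2
s2_ols = resid_ss / (n - 2)    # unbiased OLS estimator from the text

# beta_ml minimizes the sum of squared residuals, so nudging it in
# either direction can only increase the residual sum of squares.
for nudge in (1e-3, -1e-3):
    worse = sum((yi - (beta_ml + nudge) * xi) ** 2 for yi, xi in zip(y, x))
    assert worse >= resid_ss

assert abs(beta_ml - beta_true) < 0.3  # close to the true slope
assert var_ml < s2_ols                 # the ML variance is the smaller one
```

The nudge check makes the OLS equivalence concrete: the ML slope is exactly the least-squares slope.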

You can use the examples in this module as the basis of your understanding of the ML method. When you see that the ML method is used in a computer program, you can be fairly certain that the program uses one of the many optimizing subroutines to find the maximum of the log of the likelihood function. You can consult the help files of the computer program to see what underlying distribution is used to set up the log of the likelihood function. A concept related to the maximum likelihood estimation method worth exploring is the likelihood ratio test (see the module by Don Johnson entitled The Likelihood Ratio Test for an introduction to this key statistical test).

Exercises

Consider the following functions. For each of them, (1) prove that the function is a pdf; (2) calculate the mean and variance of each distribution; and (3) find the maximum likelihood estimator of the parameter $\theta$. Sketch a graph of each of the distributions for a representative value of $\theta$.

  1. f(x; θ) = (θ + 1)x^θ, where 0 ≤ x ≤ 1 and θ > 0.
  2. f(x; θ) = θe^(−θx), where 0 ≤ x < ∞ and θ > 0.
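For part (3) of both exercises, setting the derivative of the log-likelihood to zero gives closed-form estimators: for f(x; θ) = (θ + 1)x^θ, the log-likelihood is n ln(θ + 1) + θ Σ ln x_i, yielding θ̂ = −n/Σ ln x_i − 1; for f(x; θ) = θe^(−θx) it is n ln θ − θ Σ x_i, yielding θ̂ = n/Σ x_i. The sketch below checks both estimators on simulated data (the sample size and the parameter value 3.0 are illustrative choices, not part of the exercises); it is offered as a way to verify your algebra, not as a substitute for the derivation.

```python
import math
import random

def mle_power(xs):
    # f(x; theta) = (theta + 1) * x**theta on [0, 1].
    # Solving d/dtheta [n*ln(theta+1) + theta*sum(ln x_i)] = 0
    # gives theta_hat = -n / sum(ln x_i) - 1.
    return -len(xs) / sum(math.log(x) for x in xs) - 1.0

def mle_exponential(xs):
    # f(x; theta) = theta * exp(-theta * x).
    # Solving d/dtheta [n*ln(theta) - theta*sum(x_i)] = 0
    # gives theta_hat = n / sum(x_i).
    return len(xs) / sum(xs)

random.seed(0)
theta = 3.0  # illustrative parameter value

# Inverse-CDF sampling for exercise 1: the CDF of (theta+1)x^theta
# on [0, 1] is x**(theta+1), so x = u**(1/(theta+1)) for uniform u.
power_sample = [random.random() ** (1.0 / (theta + 1.0)) for _ in range(20000)]
exp_sample = [random.expovariate(theta) for _ in range(20000)]

theta_hat_power = mle_power(power_sample)
theta_hat_exp = mle_exponential(exp_sample)
```

With a sample this large, both estimates should land close to 3.0, consistent with the consistency property of maximum likelihood estimators.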

Source:  OpenStax, Econometrics for honors students. OpenStax CNX. Jul 20, 2010 Download for free at http://cnx.org/content/col11208/1.2
