The mean value and the variance give important information about the distribution of a real random variable X. In this section we ask whether the expectation of an appropriate function of a pair (X, Y) can give useful information about their joint distribution. This leads to the covariance and the correlation coefficient.

Covariance and the correlation coefficient

The mean value $\mu_X = E[X]$ and the variance $\sigma_X^2 = E[(X - \mu_X)^2]$ give important information about the distribution for real random variable $X$. Can the expectation of an appropriate function of $(X, Y)$ give useful information about the joint distribution? A clue to one possibility is given in the expression

$$\text{Var}[X \pm Y] = \text{Var}[X] + \text{Var}[Y] \pm 2\left(E[XY] - E[X]E[Y]\right)$$

The expression $E[XY] - E[X]E[Y]$ vanishes if the pair is independent (and in some other cases). We note also that for $\mu_X = E[X]$ and $\mu_Y = E[Y]$

$$E[(X - \mu_X)(Y - \mu_Y)] = E[XY] - \mu_X \mu_Y$$

To see this, expand the expression $(X - \mu_X)(Y - \mu_Y)$ and use linearity to get

$$E[(X - \mu_X)(Y - \mu_Y)] = E[XY - \mu_Y X - \mu_X Y + \mu_X \mu_Y] = E[XY] - \mu_Y E[X] - \mu_X E[Y] + \mu_X \mu_Y$$

which reduces directly to the desired expression. Now for given $\omega$, $X(\omega) - \mu_X$ is the variation of $X$ from its mean and $Y(\omega) - \mu_Y$ is the variation of $Y$ from its mean. For this reason, the following terminology is used.

Definition. The quantity $\text{Cov}[X, Y] = E[(X - \mu_X)(Y - \mu_Y)]$ is called the covariance of $X$ and $Y$.

If we let $X' = X - \mu_X$ and $Y' = Y - \mu_Y$ be the centered random variables, then

$$\text{Cov}[X, Y] = E[X'Y']$$

Note that the variance of X is the covariance of X with itself.
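
These identities are easy to check numerically. The following is a minimal sketch in Python with NumPy (the sample data and coefficients are invented for illustration): it estimates the covariance both as the mean of the centered products and via $E[XY] - \mu_X \mu_Y$, and confirms that the covariance of $X$ with itself is its variance.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical sample: Y is a noisy linear function of X, so Cov[X, Y] > 0
    x = rng.normal(loc=2.0, scale=1.5, size=100_000)
    y = 3.0 * x + rng.normal(scale=2.0, size=100_000)

    mu_x, mu_y = x.mean(), y.mean()

    cov_centered = np.mean((x - mu_x) * (y - mu_y))  # E[(X - mu_X)(Y - mu_Y)]
    cov_shortcut = np.mean(x * y) - mu_x * mu_y      # E[XY] - mu_X mu_Y

    print(cov_centered, cov_shortcut)            # agree up to roundoff
    print(np.mean((x - mu_x) ** 2), np.var(x))   # Cov[X, X] = Var[X]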

If we standardize, with $X^* = (X - \mu_X)/\sigma_X$ and $Y^* = (Y - \mu_Y)/\sigma_Y$, we have

Definition. The correlation coefficient $\rho = \rho[X, Y]$ is the quantity

$$\rho[X, Y] = E[X^* Y^*] = \frac{E[(X - \mu_X)(Y - \mu_Y)]}{\sigma_X \sigma_Y}$$

Thus $\rho = \text{Cov}[X, Y]/\sigma_X \sigma_Y$. We examine these concepts for information on the joint distribution. By Schwarz' inequality (E15), we have

$$\rho^2 = E^2[X^* Y^*] \le E[(X^*)^2]\, E[(Y^*)^2] = 1 \quad \text{with equality iff} \quad Y^* = cX^*$$

Now equality holds iff

$$1 = \rho^2 = c^2 E^2[(X^*)^2] = c^2 \quad \text{which implies} \quad c = \pm 1 \text{ and } \rho = \pm 1$$

We conclude $-1 \le \rho \le 1$, with $\rho = \pm 1$ iff $Y^* = \pm X^*$.
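
A small sketch of this conclusion, again Python with NumPy (the linear coefficients are arbitrary choices for illustration): an exactly linear relation $Y = aX + b$ yields $\rho = \pm 1$ according to the sign of $a$, while adding independent noise pulls $\rho$ strictly inside $(-1, 1)$.

    import numpy as np

    def rho(x, y):
        # Correlation as E[X* Y*] for the standardized samples
        xs = (x - x.mean()) / x.std()
        ys = (y - y.mean()) / y.std()
        return np.mean(xs * ys)

    rng = np.random.default_rng(1)
    x = rng.normal(size=50_000)

    print(rho(x, 2.0 * x + 3.0))                 # exactly linear, a > 0: rho = 1
    print(rho(x, -0.5 * x + 1.0))                # exactly linear, a < 0: rho = -1
    print(rho(x, x + rng.normal(size=50_000)))   # noisy: strictly between -1 and 1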

Relationship between ρ and the joint distribution

  • We consider first the distribution for the standardized pair $(X^*, Y^*)$.
  • Since
    $$P(X^* \le r,\ Y^* \le s) = P\left(\frac{X - \mu_X}{\sigma_X} \le r,\ \frac{Y - \mu_Y}{\sigma_Y} \le s\right) = P(X \le t = \sigma_X r + \mu_X,\ Y \le u = \sigma_Y s + \mu_Y)$$
    we obtain the results for the distribution for $(X, Y)$ by the mapping (checked numerically in the sketch below)
    $$t = \sigma_X r + \mu_X \qquad u = \sigma_Y s + \mu_Y$$
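
Here is the numerical check promised above, a minimal sketch assuming SciPy is available and taking $X$ normal with made-up parameters $\mu_X = 5$, $\sigma_X = 2$: the probability that $X^* \le r$ equals the probability that $X \le t = \sigma_X r + \mu_X$.

    from scipy.stats import norm

    mu_x, sigma_x = 5.0, 2.0   # hypothetical parameters
    r = 1.3                    # arbitrary point on the standardized scale
    t = sigma_x * r + mu_x     # image of r under the mapping

    p_standardized = norm.cdf(r)                        # P(X* <= r), X* standard normal
    p_original = norm.cdf(t, loc=mu_x, scale=sigma_x)   # P(X <= t)

    print(p_standardized, p_original)   # identical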

Joint distribution for the standardized variables $(X^*, Y^*)$, with $(r, s) = (X^*, Y^*)(\omega)$:

  • $\rho = 1$ iff $X^* = Y^*$ iff all probability mass is on the line $s = r$.
  • $\rho = -1$ iff $X^* = -Y^*$ iff all probability mass is on the line $s = -r$.

If $-1 < \rho < 1$, then at least some of the mass must fail to be on these lines.

Figure 1. Distance from the point $(r, s)$ to the line $s = r$. The line $s = r$ forms one side of a right triangle with vertices $(r, r)$ on the line and $(r, s)$ off it; the side joining these two points has length $|s - r|$, and the perpendicular distance from $(r, s)$ to the line is $|s - r|/\sqrt{2}$.
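
The length labeled in the figure is the usual point-to-line distance: writing the line $s = r$ as $r - s = 0$ and applying the formula $|ar_0 + bs_0 + c|/\sqrt{a^2 + b^2}$ with $(a, b, c) = (1, -1, 0)$ gives

$$d\bigl((r, s), \{s = r\}\bigr) = \frac{|r - s|}{\sqrt{1^2 + (-1)^2}} = \frac{|s - r|}{\sqrt{2}}$$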

The $\rho = \pm 1$ lines for the $(X, Y)$ distribution are:

$$\frac{u - \mu_Y}{\sigma_Y} = \pm \frac{t - \mu_X}{\sigma_X} \quad \text{or} \quad u = \pm \frac{\sigma_Y}{\sigma_X}(t - \mu_X) + \mu_Y$$

Consider $Z = Y^* - X^*$. Then $E\left[\tfrac{1}{2}Z^2\right] = \tfrac{1}{2}E[(Y^* - X^*)^2]$. Reference to Figure 1 shows this is the average of the square of the distances of the points $(r, s) = (X^*, Y^*)(\omega)$ from the line $s = r$ (i.e., the variance about the line $s = r$). Similarly for $W = Y^* + X^*$, $E[W^2/2]$ is the variance about $s = -r$. Now

$$\frac{1}{2}E[(Y^* \pm X^*)^2] = \frac{1}{2}\left\{E[(Y^*)^2] + E[(X^*)^2] \pm 2E[X^* Y^*]\right\} = 1 \pm \rho$$

Thus

  • $1 - \rho$ is the variance about $s = r$ (the $\rho = 1$ line)
  • $1 + \rho$ is the variance about $s = -r$ (the $\rho = -1$ line)

Now since

$$E[(Y^* - X^*)^2] = E[(Y^* + X^*)^2] \quad \text{iff} \quad \rho = E[X^* Y^*] = 0$$

the condition $\rho = 0$ is the condition for equality of the two variances.
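
Both identities are easy to verify by simulation. A sketch in Python with NumPy (the correlated pair is invented for illustration): the mean squared distance of the standardized points from the line $s = r$ comes out as $1 - \rho$, and from $s = -r$ as $1 + \rho$.

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=200_000)
    y = 0.6 * x + rng.normal(scale=0.8, size=200_000)   # hypothetical correlated pair

    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    rho = np.mean(xs * ys)

    var_about_diag = 0.5 * np.mean((ys - xs) ** 2)   # variance about s = r
    var_about_anti = 0.5 * np.mean((ys + xs) ** 2)   # variance about s = -r

    print(var_about_diag, 1 - rho)   # equal up to roundoff
    print(var_about_anti, 1 + rho)   # equal up to roundoff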

Transformation to the ( X , Y ) plane

$$t = \sigma_X r + \mu_X \qquad u = \sigma_Y s + \mu_Y \qquad \text{with inverse} \qquad r = \frac{t - \mu_X}{\sigma_X} \qquad s = \frac{u - \mu_Y}{\sigma_Y}$$

The $\rho = 1$ line is:

$$\frac{u - \mu_Y}{\sigma_Y} = \frac{t - \mu_X}{\sigma_X} \quad \text{or} \quad u = \frac{\sigma_Y}{\sigma_X}(t - \mu_X) + \mu_Y$$
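
To close the loop, a short sketch (Python with NumPy; the relation $Y = 2X + 3$ is an invented example with $\rho = 1$): every sample point lands on the line $u = (\sigma_Y/\sigma_X)(t - \mu_X) + \mu_Y$.

    import numpy as np

    rng = np.random.default_rng(3)
    t = rng.normal(loc=10.0, scale=4.0, size=10_000)
    u = 2.0 * t + 3.0   # exact linear relation, so rho = 1

    mu_x, sigma_x = t.mean(), t.std()
    mu_y, sigma_y = u.mean(), u.std()

    u_line = (sigma_y / sigma_x) * (t - mu_x) + mu_y   # the rho = 1 line
    print(np.max(np.abs(u - u_line)))                  # ~0: all mass on the line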

Source: OpenStax, Applied probability. OpenStax CNX. Aug 31, 2009. Download for free at http://cnx.org/content/col10708/1.6