
The ρ = -1 line is:

$$\frac{u - \mu_Y}{\sigma_Y} = -\frac{t - \mu_X}{\sigma_X} \qquad\text{or}\qquad u = -\frac{\sigma_Y}{\sigma_X}(t - \mu_X) + \mu_Y$$

1 - ρ is proportional to the variance about the ρ = 1 line, and 1 + ρ is proportional to the variance about the ρ = -1 line. Thus ρ = 0 iff the variances about the two lines are the same.

Uncorrelated but not independent

Suppose the joint density for { X , Y } is constant on the unit circle about the origin. By the rectangle test, the pair cannot be independent. By symmetry, the ρ = 1 line is u = t and the ρ = - 1 line is u = - t . By symmetry, also, the variance about each of these lines is the same. Thus ρ = 0 , which is true iff Cov [ X , Y ] = 0 . This fact can be verified by calculation, if desired.
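The claim that ρ = 0 can indeed be checked by calculation. A minimal numerical sketch in Python (an assumption of this edition; the text itself uses MATLAB): representing the uniform mass on the unit circle as X = cos Θ, Y = sin Θ with Θ uniform on [0, 2π), the covariance evaluates to zero.

```python
import numpy as np

# X = cos(theta), Y = sin(theta) with theta uniform on [0, 2*pi) puts
# constant probability mass on the unit circle about the origin.
n = 100000
theta = (np.arange(n) + 0.5) * (2 * np.pi / n)   # midpoint grid on [0, 2*pi)

x, y = np.cos(theta), np.sin(theta)
ex, ey = x.mean(), y.mean()           # E[X] = E[Y] = 0 by symmetry
cov = (x * y).mean() - ex * ey        # Cov[X,Y] = E[XY] - E[X]E[Y]
print(round(abs(cov), 10))   # 0.0
```

The integrand cos θ sin θ = sin(2θ)/2 averages to zero over a full period, confirming Cov[X, Y] = 0 even though Y is functionally tied to X on the circle.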


Uniform marginal distributions

Figure: three plots, each with t on the horizontal axis and u on the vertical axis. (a) A single square of side two centered at the origin, divided into four equal parts by the axes; caption ρ = 0. (b) Two unit squares, one in the first quadrant and one in the third, each with two sides along the axes; caption ρ = 3/4. (c) Two unit squares, one in the second quadrant and one in the fourth, each with two sides along the axes; caption ρ = -3/4.
Uniform marginals but different correlation coefficients.

Consider the three distributions in [link]. In case (a), the distribution is uniform over the square centered at the origin with vertices at (1,1), (-1,1), (-1,-1), (1,-1). In case (b), the distribution is uniform over two squares, in the first and third quadrants, with vertices (0,0), (1,0), (1,1), (0,1) and (0,0), (-1,0), (-1,-1), (0,-1). In case (c), the two squares are in the second and fourth quadrants. The marginals are uniform on (-1,1) in each case, so that in each case

$$E[X] = E[Y] = 0 \qquad\text{and}\qquad \mathrm{Var}[X] = \mathrm{Var}[Y] = 1/3$$

This means the ρ = 1 line is u = t and the ρ = - 1 line is u = - t .

  1. By symmetry, E[XY] = 0 (in fact the pair is independent) and ρ = 0.
  2. For every pair of possible values, the two signs must be the same, so E[XY] > 0, which implies ρ > 0. The actual value may be calculated to give ρ = 3/4. Since 1 - ρ < 1 + ρ, the variance about the ρ = 1 line is less than that about the ρ = -1 line. This is evident from the figure.
  3. E[XY] < 0 and ρ < 0. Since 1 + ρ < 1 - ρ, the variance about the ρ = -1 line is less than that about the ρ = 1 line. Again, examination of the figure confirms this.
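The values ρ = 3/4 and ρ = -3/4 can be verified numerically. A small Python sketch (an assumption of this edition, not part of the text's MATLAB tools): each case places density 1/2 on two unit squares, identified below by the signs of t and u in the quadrant they occupy.

```python
import numpy as np

# Midpoint grid on the unit square (0,1) x (0,1); each case splits the
# probability mass evenly between two unit squares in opposite quadrants.
n = 400
s = (np.arange(n) + 0.5) / n
t, u = np.meshgrid(s, s)

def rho(squares):
    # squares: list of (sign_t, sign_u) pairs placing a unit square
    # in the corresponding quadrant, each carrying probability 1/2
    exy = np.mean([np.mean((st * t) * (su * u)) for st, su in squares])
    ex  = np.mean([np.mean(st * t) for st, su in squares])
    ey  = np.mean([np.mean(su * u) for st, su in squares])
    vx  = np.mean([np.mean((st * t) ** 2) for st, su in squares]) - ex ** 2
    vy  = np.mean([np.mean((su * u) ** 2) for st, su in squares]) - ey ** 2
    return (exy - ex * ey) / np.sqrt(vx * vy)

print(round(rho([(1, 1), (-1, -1)]), 3))   # case (b):  0.75
print(round(rho([(-1, 1), (1, -1)]), 3))   # case (c): -0.75
```

Case (a) is simply the product of two independent uniforms, so its ρ = 0 needs no computation.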

A pair of simple random variables

With the aid of m-functions and MATLAB we can easily calculate the covariance and the correlation coefficient. We use the joint distribution for Example 9 in "Variance." In that example, calculations show

$$E[XY] - E[X]E[Y] = -0.1633 = \mathrm{Cov}[X, Y], \qquad \sigma_X = 1.8170, \quad \sigma_Y = 1.9122$$

so that ρ = -0.04699.
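For readers without the m-functions, the same computation is easy to express in Python. The joint matrix P below is a small HYPOTHETICAL example (it is not the distribution of Example 9 in "Variance"); the pattern of the calculation is what matters.

```python
import numpy as np

# Sketch of the covariance/correlation computation for a simple pair.
# The values and probabilities here are HYPOTHETICAL illustration data.
X = np.array([-1.0, 0.0, 2.0])          # values of X (columns)
Y = np.array([0.0, 1.0])                # values of Y (rows)
P = np.array([[0.2, 0.1, 0.3],          # P[i, j] = P(Y = Y[i], X = X[j])
              [0.1, 0.2, 0.1]])

PX, PY = P.sum(axis=0), P.sum(axis=1)   # marginal distributions
EX, EY = X @ PX, Y @ PY
EXY = Y @ P @ X                         # E[XY] = sum_ij Y_i X_j P_ij
cov = EXY - EX * EY
rho = cov / np.sqrt((X**2 @ PX - EX**2) * (Y**2 @ PY - EY**2))
print(round(rho, 4))   # -0.1589
```

Substituting the actual joint matrix from Example 9 reproduces Cov[X, Y] = -0.1633 and ρ = -0.04699.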


An absolutely continuous pair

The pair { X , Y } has joint density function $f_{XY}(t, u) = \frac{6}{5}(t + 2u)$ on the triangular region bounded by t = 0, u = t, and u = 1. By the usual integration techniques, we have

$$f_X(t) = \frac{6}{5}(1 + t - 2t^2), \quad 0 \le t \le 1 \qquad\text{and}\qquad f_Y(u) = 3u^2, \quad 0 \le u \le 1$$

From this we obtain E[X] = 2/5, Var[X] = 3/50, E[Y] = 3/4, and Var[Y] = 3/80. To complete the picture we need

$$E[XY] = \frac{6}{5}\int_0^1 \int_t^1 (t^2 u + 2 t u^2)\, du\, dt = 8/25$$

Then

$$\mathrm{Cov}[X, Y] = E[XY] - E[X]E[Y] = 2/100 \qquad\text{and}\qquad \rho = \frac{\mathrm{Cov}[X, Y]}{\sigma_X \sigma_Y} = \frac{4}{30}\sqrt{10} \approx 0.4216$$

APPROXIMATION

tuappr
Enter matrix [a b] of X-range endpoints  [0 1]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  200
Enter number of Y approximation points  200
Enter expression for joint density  (6/5)*(t + 2*u).*(u>=t)
Use array operations on X, Y, PX, PY, t, u, and P
EX = total(t.*P)
EX =  0.4012          % Theoretical = 0.4
EY = total(u.*P)
EY =  0.7496          % Theoretical = 0.75
VX = total(t.^2.*P) - EX^2
VX =  0.0603          % Theoretical = 0.06
VY = total(u.^2.*P) - EY^2
VY =  0.0376          % Theoretical = 0.0375
CV = total(t.*u.*P) - EX*EY
CV =  0.0201          % Theoretical = 0.02
rho = CV/sqrt(VX*VY)
rho = 0.4212          % Theoretical = 0.4216
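The same grid approximation carries over directly to Python. The sketch below (an assumption of this edition; tuappr itself is a MATLAB m-function) discretizes the density on a 200 × 200 midpoint grid and reproduces the approximate moments.

```python
import numpy as np

# Rough Python analogue of the tuappr grid approximation:
# 200 x 200 midpoint grid on the unit square, density zero off the triangle.
n = 200
pts = (np.arange(n) + 0.5) / n
t, u = np.meshgrid(pts, pts)
P = (6/5) * (t + 2*u) * (u >= t) / n**2    # cell probabilities
P /= P.sum()                               # normalize the discretization

EX, EY = (t*P).sum(), (u*P).sum()
VX = (t**2 * P).sum() - EX**2
VY = (u**2 * P).sum() - EY**2
CV = (t*u*P).sum() - EX*EY
rho = CV / np.sqrt(VX * VY)
print(round(rho, 2))   # 0.42 (theoretical 0.4216)
```

As with tuappr, the approximation improves as the number of grid points grows.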

Coefficient of linear correlation

The parameter ρ is usually called the correlation coefficient. A more descriptive name would be coefficient of linear correlation . The following example shows that all probability mass may be on a curve, so that Y = g ( X ) (i.e., the value of Y is completely determined by the value of X ), yet ρ = 0 .

Y = g(X) but ρ = 0

Suppose X ~ uniform (-1,1), so that f_X(t) = 1/2, -1 < t < 1, and E[X] = 0. Let Y = g(X) = cos X. Then

$$\mathrm{Cov}[X, Y] = E[XY] = \frac{1}{2}\int_{-1}^{1} t \cos t \, dt = 0$$

Thus ρ = 0. Note that g could be any even function defined on (-1,1). In this case the integrand t g(t) is odd, so that the value of the integral is zero.
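A quick numerical check in Python (an assumption of this edition) makes the point concrete: Y = cos X is completely determined by X, yet the covariance vanishes because t cos t is odd on the symmetric interval.

```python
import numpy as np

# X uniform on (-1, 1), Y = cos(X): Y is a deterministic function of X,
# yet Cov[X, Y] = 0 because the integrand t*cos(t) is odd.
n = 200000
t = -1.0 + (np.arange(n) + 0.5) * (2.0 / n)   # midpoints of (-1, 1)
y = np.cos(t)

ex  = t.mean()                   # E[X] = 0 by symmetry
ey  = y.mean()                   # E[cos X] = sin(1), nonzero
cov = (t * y).mean() - ex * ey   # vanishes despite Y = g(X)
print(round(abs(cov), 10))   # 0.0
```

Replacing np.cos with any other even function of t gives the same result, matching the remark above.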


Variance and covariance for linear combinations

We generalize the property (V4) on linear combinations. Consider the linear combinations

$$X = \sum_{i=1}^{n} a_i X_i \qquad\text{and}\qquad Y = \sum_{j=1}^{m} b_j Y_j$$

We wish to determine Cov[X, Y] and Var[X]. It is convenient to work with the centered random variables X' = X - μ_X and Y' = Y - μ_Y. Since, by linearity of expectation,

$$\mu_X = \sum_{i=1}^{n} a_i \mu_{X_i} \qquad\text{and}\qquad \mu_Y = \sum_{j=1}^{m} b_j \mu_{Y_j}$$

we have

$$X' = \sum_{i=1}^{n} a_i X_i - \sum_{i=1}^{n} a_i \mu_{X_i} = \sum_{i=1}^{n} a_i (X_i - \mu_{X_i}) = \sum_{i=1}^{n} a_i X_i'$$

and similarly for Y ' . By definition

$$\mathrm{Cov}(X, Y) = E[X'Y'] = E\left[\sum_{i,j} a_i b_j X_i' Y_j'\right] = \sum_{i,j} a_i b_j E[X_i' Y_j'] = \sum_{i,j} a_i b_j \mathrm{Cov}(X_i, Y_j)$$

In particular

$$\mathrm{Var}(X) = \mathrm{Cov}(X, X) = \sum_{i,j} a_i a_j \mathrm{Cov}(X_i, X_j) = \sum_{i=1}^{n} a_i^2 \mathrm{Cov}(X_i, X_i) + \sum_{i \neq j} a_i a_j \mathrm{Cov}(X_i, X_j)$$

Using the fact that a_i a_j Cov(X_i, X_j) = a_j a_i Cov(X_j, X_i), we have

$$\mathrm{Var}[X] = \sum_{i=1}^{n} a_i^2 \mathrm{Var}[X_i] + 2\sum_{i<j} a_i a_j \mathrm{Cov}(X_i, X_j)$$

Note that a_i^2 does not depend upon the sign of a_i. If the X_i form an independent class, or are otherwise uncorrelated, the expression for the variance reduces to

$$\mathrm{Var}[X] = \sum_{i=1}^{n} a_i^2 \mathrm{Var}[X_i]$$
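The double-sum and reduced forms of the variance formula are easy to check against each other numerically. A Python sketch (an assumption of this edition): build an arbitrary valid covariance matrix C for (X_1, ..., X_4) and compare the quadratic form with the diagonal-plus-cross-terms expansion.

```python
import numpy as np

# Check of the variance formula for X = sum_i a_i X_i: compare the
# full double sum a' C a with the reduced form
# sum_i a_i^2 Var[X_i] + 2 * sum_{i<j} a_i a_j Cov(X_i, X_j).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
C = A @ A.T                          # arbitrary valid covariance matrix
a = np.array([1.0, -2.0, 0.5, 3.0]) # coefficients of the combination

full = a @ C @ a                     # sum_{i,j} a_i a_j Cov(X_i, X_j)
reduced = np.sum(a**2 * np.diag(C)) + 2 * sum(
    a[i] * a[j] * C[i, j] for i in range(4) for j in range(i + 1, 4))
print(np.isclose(full, reduced))   # True
```

Setting the off-diagonal entries of C to zero (the uncorrelated case) collapses both expressions to the single sum of a_i^2 Var[X_i].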

Source:  OpenStax, Applied probability. OpenStax CNX. Aug 31, 2009 Download for free at http://cnx.org/content/col10708/1.6
