
The pair $\{X, Y\}$ is ci$|H$ (conditionally independent, given $H$): $X \sim$ exponential $(u/3)$, given $H = u$; $Y \sim$ exponential $(u/5)$, given $H = u$; and $H \sim$ uniform $[1, 2]$. Determine a general formula for $P(X > r, Y > s)$, then evaluate for $r = 3$, $s = 10$.

$$P(X > r, Y > s \mid H = u) = e^{-ur/3} e^{-us/5} = e^{-au}, \quad a = \frac{r}{3} + \frac{s}{5}$$

$$P(X > r, Y > s) = \int e^{-au} f_H(u)\, du = \int_1^2 e^{-au}\, du = \frac{1}{a}\left[e^{-a} - e^{-2a}\right]$$

For $r = 3$, $s = 10$, $a = 3$:

$$P(X > 3, Y > 10) = \frac{1}{3}\left(e^{-3} - e^{-6}\right) = 0.0158$$
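The closed form is easy to check numerically. A minimal MATLAB sketch (not part of the original solution; the variable names are chosen for illustration) evaluates the formula and compares it with direct numerical integration:

r = 3; s = 10;
a = r/3 + s/5;                          % a = 3
Pexact = (exp(-a) - exp(-2*a))/a        % Pexact = 0.0158
Pnum = integral(@(u) exp(-a*u), 1, 2)   % numerical check; agrees with Pexact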

A small random sample of size $n = 12$ is taken to determine the proportion of the student body which favors a proposal to expand the student Honor Council by adding two additional members "at large." Prior information indicates that this proportion is about $0.6 = 3/5$. From a Bayesian point of view, the population proportion is taken to be the value of a random variable $H$. It seems reasonable to assume a prior distribution $H \sim$ beta $(4, 3)$, giving a maximum of the density at $(4-1)/(4+3-2) = 3/5$. Seven of the twelve interviewed favor the proposition. What is the best mean-square estimate of the proportion, given this result? What is the conditional distribution of $H$, given this result?

$H \sim$ beta $(r, s)$, $r = 4$, $s = 3$, $n = 12$, $k = 7$

$$E[H \mid S = k] = \frac{k + r}{n + r + s} = \frac{7 + 4}{12 + 4 + 3} = \frac{11}{19} \approx 0.579$$

The conditional distribution, given $S = k = 7$, is beta $(r + k, s + n - k)$ = beta $(11, 8)$.
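As a numerical check (a minimal MATLAB sketch, with variable names chosen for illustration), the estimate agrees with the mean of the posterior beta $(r + k, s + n - k)$ distribution:

r = 4; s = 3; n = 12; k = 7;
est = (k + r)/(n + r + s)      % est = 11/19 = 0.5789
pmean = (r + k)/(r + s + n)    % mean of beta(11, 8); the same value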

Let $\{X_i : 1 \le i \le n\}$ be a random sample, given $H$. Set $W = (X_1, X_2, \ldots, X_n)$. Suppose $X$ is conditionally geometric $(u)$, given $H = u$; i.e., suppose $P(X = k \mid H = u) = u(1-u)^k$ for all $k \ge 0$. If $H \sim$ uniform on $[0, 1]$, determine the best mean square estimator for $H$, given $W$.

$$E[H \mid W = k] = \frac{E[H I_{\{k\}}(W)]}{E[I_{\{k\}}(W)]} = \frac{E\{H E[I_{\{k\}}(W) \mid H]\}}{E\{E[I_{\{k\}}(W) \mid H]\}}$$

$$= \frac{\int u P(W = k \mid H = u) f_H(u)\, du}{\int P(W = k \mid H = u) f_H(u)\, du}, \quad k = (k_1, k_2, \ldots, k_n)$$

$$P(W = k \mid H = u) = \prod_{i=1}^{n} u(1-u)^{k_i} = u^n (1-u)^{k^*}, \quad k^* = \sum_{i=1}^{n} k_i$$

$$E[H \mid W = k] = \frac{\int_0^1 u^{n+1}(1-u)^{k^*}\, du}{\int_0^1 u^n (1-u)^{k^*}\, du} = \frac{\Gamma(n+2)\Gamma(k^*+1)}{\Gamma(n+k^*+3)} \cdot \frac{\Gamma(n+k^*+2)}{\Gamma(n+1)\Gamma(k^*+1)} = \frac{n+1}{n+k^*+2}$$
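Since $\int_0^1 u^{a-1}(1-u)^{b-1}\, du = B(a, b)$, the gamma-function reduction can be verified numerically for particular values. A minimal MATLAB sketch (the values of n and kstar are hypothetical):

n = 5; kstar = 12;                  % hypothetical sample size and total count
num = beta(n+2, kstar+1);           % integral of u^(n+1) (1-u)^kstar
den = beta(n+1, kstar+1);           % integral of u^n (1-u)^kstar
[num/den  (n+1)/(n+kstar+2)]        % both give 0.3158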

Let $\{X_i : 1 \le i \le n\}$ be a random sample, given $H$. Set $W = (X_1, X_2, \ldots, X_n)$. Suppose $X$ is conditionally Poisson $(u)$, given $H = u$; i.e., suppose $P(X = k \mid H = u) = e^{-u} u^k / k!$. If $H \sim$ gamma $(m, \lambda)$, determine the best mean square estimator for $H$, given $W$.

$$E[H \mid W = k] = \frac{\int u P(W = k \mid H = u) f_H(u)\, du}{\int P(W = k \mid H = u) f_H(u)\, du}$$

$$P(W = k \mid H = u) = \prod_{i=1}^{n} \frac{e^{-u} u^{k_i}}{k_i!} = \frac{e^{-nu} u^{k^*}}{A}, \quad A = \prod_{i=1}^{n} k_i!, \quad k^* = \sum_{i=1}^{n} k_i$$

$$f_H(u) = \frac{\lambda^m u^{m-1} e^{-\lambda u}}{\Gamma(m)}$$

$$E[H \mid W = k] = \frac{\int_0^\infty u^{k^* + m} e^{-(\lambda+n)u}\, du}{\int_0^\infty u^{k^* + m - 1} e^{-(\lambda+n)u}\, du} = \frac{\Gamma(m + k^* + 1)}{(\lambda+n)^{k^*+m+1}} \cdot \frac{(\lambda+n)^{k^*+m}}{\Gamma(m+k^*)} = \frac{m + k^*}{\lambda + n}$$
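Again the reduction can be checked numerically for particular parameter values. A minimal MATLAB sketch (the values of m, lambda, n, and kstar are hypothetical):

m = 2; lambda = 0.5; n = 8; kstar = 10;
num = integral(@(u) u.^(kstar+m).*exp(-(lambda+n)*u), 0, Inf);
den = integral(@(u) u.^(kstar+m-1).*exp(-(lambda+n)*u), 0, Inf);
[num/den  (m+kstar)/(lambda+n)]     % both give 1.4118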

Suppose $\{N, H\}$ is independent and $\{N, Y\}$ is ci$|H$. Use properties of conditional expectation and conditional independence to show that

$$E[g(N)h(Y) \mid H] = E[g(N)]\, E[h(Y) \mid H] \quad \text{a.s.}$$

$$E[g(N)h(Y) \mid H] = E[g(N) \mid H]\, E[h(Y) \mid H] \quad \text{a.s. by (CI6), and}$$

$$E[g(N) \mid H] = E[g(N)] \quad \text{a.s. by (CE5)}$$
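Taking total expectation of the identity gives $E[g(N)h(Y)] = E[g(N)]\, E[h(Y)]$, which can be checked by simulation. A minimal MATLAB sketch under a hypothetical model in which the hypotheses hold: $N$ is independent of $\{H, Y\}$, $H$ is uniform on $[0, 1]$, and $Y$ is conditionally exponential with mean $H$ (poissrnd and exprnd are from the Statistics Toolbox):

nsim = 100000;
H = rand(nsim, 1);                  % H uniform on [0, 1]
N = poissrnd(2, nsim, 1);           % N Poisson(2), independent of {H, Y}
Y = exprnd(H);                      % Y|H = u exponential with mean u
g = @(x) x.^2;  h = @(y) exp(-y);
[mean(g(N).*h(Y))  mean(g(N))*mean(h(Y))]   % approximately equal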


Consider the composite demand $D$ introduced in the section on Random Sums in "Random Selection":

$$D = \sum_{n=0}^{\infty} I_{\{n\}}(N) X_n, \quad \text{where } X_n = \sum_{k=0}^{n} Y_k, \quad Y_0 = 0$$

Suppose $\{N, H\}$ is independent, $\{N, Y_i\}$ is ci$|H$ for all $i$, and $E[Y_i \mid H] = e(H)$, invariant with $i$. Show that $E[D \mid H] = E[N]\, E[Y \mid H]$ a.s.

$$E[D \mid H] = \sum_{n=1}^{\infty} E[I_{\{n\}}(N) X_n \mid H] \quad \text{a.s.}$$

By the result of the preceding exercise,

$$E[I_{\{n\}}(N) X_n \mid H] = \sum_{k=1}^{n} E[I_{\{n\}}(N) Y_k \mid H] = \sum_{k=1}^{n} P(N = n)\, E[Y \mid H] = P(N = n)\, n\, E[Y \mid H] \quad \text{a.s.}$$

$$E[D \mid H] = \sum_{n=1}^{\infty} n P(N = n)\, E[Y \mid H] = E[N]\, E[Y \mid H] \quad \text{a.s.}$$
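The identity can be checked by simulating $D$ directly. A minimal MATLAB sketch under a hypothetical model satisfying the hypotheses: $H$ uniform $[1, 2]$, $N$ Poisson $(3)$ independent of $H$, and the $Y_i$ conditionally exponential with rate $H$, so that $E[Y \mid H] = 1/H$ and $E[D] = E[N]\, E[Y] = 3 \ln 2 \approx 2.0794$:

nsim = 100000;
H = 1 + rand(nsim, 1);              % H uniform on [1, 2]
N = poissrnd(3, nsim, 1);           % N Poisson(3), independent of H
D = zeros(nsim, 1);
for i = 1:nsim
  D(i) = sum(exprnd(1/H(i), N(i), 1));   % N(i) demands, each with mean 1/H(i)
end
[mean(D)  3*log(2)]                 % approximately equal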

The transition matrix $P$ for a homogeneous Markov chain is as follows (in m-file npr16_07.m):

$$P = \begin{bmatrix} 0.23 & 0.32 & 0.02 & 0.22 & 0.21 \\ 0.29 & 0.41 & 0.10 & 0.08 & 0.12 \\ 0.22 & 0.07 & 0.31 & 0.14 & 0.26 \\ 0.32 & 0.15 & 0.05 & 0.33 & 0.15 \\ 0.08 & 0.23 & 0.31 & 0.09 & 0.29 \end{bmatrix}$$
  1. Obtain the absolute values of the eigenvalues, then consider increasing powers of $P$ to observe the convergence to the long-run distribution.
  2. Take an arbitrary initial distribution $p_0$ (as a row matrix). The product $p_0 P^k$ is the distribution for stage $k$. Note what happens as $k$ becomes large enough to give convergence to the long-run transition matrix. Does the end result change with a change of initial distribution $p_0$?
ev = abs(eig(P))'
ev = 1.0000    0.0814    0.0814    0.3572    0.2429
a = ev(4).^[2 4 8 16 24]
a = 0.1276    0.0163    0.0003    0.0000    0.0000    % By P^16 the rows agree to four places
p0 = [0.5 0 0 0.3 0.2];    % An arbitrarily chosen p0
p4 = p0*P^4
p4 = 0.2297    0.2622    0.1444    0.1644    0.1992
p8 = p0*P^8
p8 = 0.2290    0.2611    0.1462    0.1638    0.2000
p16 = p0*P^16
p16 = 0.2289    0.2611    0.1462    0.1638    0.2000
p0a = [0 0 0 0 1];         % A second choice of p0
p16a = p0a*P^16
p16a = 0.2289    0.2611    0.1462    0.1638    0.2000
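The long-run distribution observed in the powers of $P$ may also be obtained directly as the normalized left eigenvector of $P$ for eigenvalue one. A short MATLAB sketch (not part of npr16_07.m):

[V, L] = eig(P');              % right eigenvectors of P' are left eigenvectors of P
[mx, j] = max(real(diag(L)));  % locate the eigenvalue 1
v = real(V(:, j));
pinf = v'/sum(v)               % pinf = 0.2289  0.2611  0.1462  0.1638  0.2000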

Source:  OpenStax, Applied probability. OpenStax CNX. Aug 31, 2009 Download for free at http://cnx.org/content/col10708/1.6