
Increase x by 1 and repeat the selection process for the step from t = - 2 to t = - 1 , again selecting k arrows, one arrow leaving each state at t = - 2 . You must not re-select arrows for the steps already completed in previous runs (here, the previous run covered the step from t = - 1 to t = 0 ), because doing so would introduce a bias favoring the arrows with larger probabilities. Instead, you must reuse those arrows in every successive run of the CFTP process. Now, starting at each of the states of your state set, follow the k compositions of steps from t = - 2 to t = 0 and check for coalescence. Start new runs of CFTP, increasing x by 1 for each run, until coalescence occurs (all k histories from t = - x to t = 0 return the same state at t = 0 ). Output the state at t = 0 that all histories return. Propp and Wilson proved that this output state is sampled exactly according to the steady-state distribution of the MC [link] . See [link] , which shows three runs of CFTP for the MC from [link] .
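The procedure above can be sketched in code. The 3-state chain below is a hypothetical example (not the MC from the text); the essential points are that fresh arrows are drawn only for the newly added earliest step, earlier draws are reused across runs, and the run depth grows by 1 until all histories coalesce at t = 0.

```python
import random

# A toy 3-state Markov chain (hypothetical, not the one from the text):
# P[s] lists (next_state, probability) pairs, summing to 1 for each s.
P = {
    0: [(0, 0.5), (1, 0.5)],
    1: [(0, 0.25), (1, 0.25), (2, 0.5)],
    2: [(1, 0.5), (2, 0.5)],
}

def draw_arrows(rng):
    """Select one arrow leaving each state, according to its probabilities."""
    arrows = {}
    for s, dests in P.items():
        r, acc = rng.random(), 0.0
        for nxt, p in dests:
            acc += p
            if r < acc:
                arrows[s] = nxt
                break
        else:                          # guard against float round-off
            arrows[s] = dests[-1][0]
    return arrows

def cftp(rng=None):
    rng = rng or random.Random(0)
    arrows = {}    # arrows[t] maps each state to its successor for step t
    x = 0
    while True:
        x += 1
        if -x not in arrows:           # draw fresh arrows ONLY for the new step;
            arrows[-x] = draw_arrows(rng)  # earlier steps are reused, never redrawn
        ends = set()
        for s in P:                    # follow every state's history from t = -x to 0
            cur = s
            for t in range(-x, 0):
                cur = arrows[t][cur]
            ends.add(cur)
        if len(ends) == 1:             # coalescence: all k histories agree at t = 0
            return ends.pop()
```

Because the same arrows are replayed on every run, the eventual output is an exact draw from the chain's stationary distribution rather than an approximation.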

To simplify the task of choosing k arrows per step of CFTP, we can instead make only one decision per step by selecting a random variable, which we will call u_t . The random variable may be represented in the following manner. Let u_t be a real number drawn from the interval [ 0 , 1 ] with uniform distribution (every value of u_t is equally likely). Since the probabilities on the arrows exiting any one state sum to 1, arrange those probabilities in some fixed order on the interval [ 0 , 1 ] . For each step of CFTP, select a value for u_t ; where this value lands in each state's subdivision of [ 0 , 1 ] dictates which arrow is chosen leaving each of the k states.
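One way to realize this trick, sketched below for a hypothetical 3-state chain: lay each state's outgoing probabilities end to end as cumulative cutpoints on [0, 1], then a single draw u_t selects an arrow out of every state simultaneously.

```python
import random
from itertools import accumulate

# Transition probabilities of a hypothetical 3-state chain (each row sums to 1).
P = {
    "a": [("a", 0.5), ("b", 0.5)],
    "b": [("a", 0.25), ("b", 0.25), ("c", 0.5)],
    "c": [("b", 0.5), ("c", 0.5)],
}

# cutpoints[s] = list of (cumulative_upper_bound, destination):
# state s's probabilities arranged in fixed order on the interval [0, 1].
cutpoints = {
    s: list(zip(accumulate(p for _, p in dests), (d for d, _ in dests)))
    for s, dests in P.items()
}

def step_all(states, u_t):
    """A single draw u_t in [0, 1] picks one arrow out of EVERY state at once."""
    out = []
    for s in states:
        for upper, dest in cutpoints[s]:
            if u_t <= upper:
                out.append(dest)
                break
    return out

u = random.random()            # one decision replaces k separate arrow choices
successors = step_all(["a", "b", "c"], u)
```

For instance, u_t = 0.3 falls in [0, 0.5] for state "a" (go to "a"), in (0.25, 0.5] for state "b" (go to "b"), and in [0, 0.5] for state "c" (go to "b"), so `step_all(["a", "b", "c"], 0.3)` returns `["a", "b", "b"]`.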

Monotonicity

If k is very large, the CFTP algorithm requires a great deal of work, since you must follow all k histories and check all k of them for coalescence (making the algorithm actually “k-tupling" from the past). To greatly reduce the number of histories that must be followed, Propp and Wilson prove that if the MC is “monotonic" with respect to a specified partial ordering of the states, you need only follow two histories (true “coupling"). Monotonicity means the partial ordering of the state set is preserved under steps of the MC. This allows us to follow just two histories: that of the largest state and that of the smallest state. Coalescence of these two histories then implies the coalescence of all k histories [link] .
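A minimal sketch of monotone CFTP, assuming a hypothetical random walk on the totally ordered states 0, …, N. The update rule below uses the same u_t for every state and moves all states in the same direction (clipping at the boundaries), so it preserves the ordering; tracking only the bottom state 0 and the top state N suffices.

```python
import random

N = 4  # states 0..N with their natural ordering (a hypothetical chain)

def step(s, u):
    """Monotone update: one shared u moves every state the same way,
    so s <= s' implies step(s, u) <= step(s', u)."""
    if u < 0.5:
        return min(s + 1, N)   # clip at the top
    return max(s - 1, 0)       # clip at the bottom

def monotone_cftp(rng=None):
    rng = rng or random.Random(1)
    us = {}                    # us[t] is reused across runs, as in plain CFTP
    x = 0
    while True:
        x += 1
        if -x not in us:
            us[-x] = rng.random()
        lo, hi = 0, N          # follow ONLY the smallest and largest states
        for t in range(-x, 0):
            lo, hi = step(lo, us[t]), step(hi, us[t])
        if lo == hi:           # top and bottom coalesced, hence all k states did
            return lo
```

The boundary clipping is what shrinks the gap between the two tracked histories; every other state's history is sandwiched between them, which is why their coalescence settles the question for all k histories at once.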

Heat bath algorithm and monotonicity

To discuss the concept of a heat bath algorithm, we will first define a spin system on a vertex set, V , to be the set of all spin configurations σ . Each configuration σ assigns each vertex i ∈ V either an “up” or a “down” spin, and each σ is given a probability, π ( σ ) . For a configuration σ and a vertex i , let σ↑i denote the configuration that agrees with σ at all vertices j ∈ V , j ≠ i , and has i spinning up, and let σ↓i denote the configuration that agrees with σ at all j ∈ V , j ≠ i , and has i spinning down.
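These definitions can be made concrete in code. The sketch below uses a tiny hypothetical vertex set and an illustrative Ising-style weight for π (an assumption, not taken from the text); the heat-bath step shown is the standard one, which resamples a single vertex i with probability π(σ↑i) / (π(σ↑i) + π(σ↓i)) of spinning up.

```python
import math
import random

V = [0, 1, 2]             # a tiny vertex set (hypothetical)
E = [(0, 1), (1, 2)]      # edges for an illustrative Ising-style weight

def weight(sigma, beta=0.5):
    """Unnormalized pi(sigma) ~ exp(beta * sum over edges of agreement).
    Spins are +1 (up) / -1 (down); the Ising form is an assumed example."""
    return math.exp(beta * sum(sigma[i] * sigma[j] for i, j in E))

def with_spin(sigma, i, spin):
    """sigma up-at-i / sigma down-at-i: agree with sigma off vertex i."""
    s = dict(sigma)
    s[i] = spin
    return s

def heat_bath_step(sigma, i, u):
    """Resample vertex i from its conditional distribution given the rest,
    driven by a single uniform draw u in [0, 1]."""
    up, down = with_spin(sigma, i, +1), with_spin(sigma, i, -1)
    p_up = weight(up) / (weight(up) + weight(down))
    return up if u < p_up else down

sigma = {v: random.choice([+1, -1]) for v in V}
sigma = heat_bath_step(sigma, 1, random.random())
```

Driving the update with a shared uniform u is what makes this style of chain amenable to the monotone CFTP construction above.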





Source:  OpenStax, The art of the pfug. OpenStax CNX. Jun 05, 2013 Download for free at http://cnx.org/content/col10523/1.34
