
Let:

  • h = fraction of time that a reference does not require a page fault.
  • tmem = time it takes to read a word from memory.
  • tdisk = time it takes to read a page from disk.

then

  • EAT (effective access time) = h * tmem + (1 - h) * tdisk.

If there are multiple classes of memory accesses, such as no disk access, one disk access, and two disk accesses, then you would have a fraction (h) and an access time (t) for each class of access.

Note that this calculation is the same type that computer architects use to calculate memory performance. In that case, their access classes might be (1) cached in L1, (2) cached in L2, and (3) RAM.
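Plugging in representative numbers shows why h must be extremely close to 1 for paging to perform well. A small sketch in Python, using assumed timings (roughly 100 ns per memory reference and 8 ms per disk page read; your hardware will differ):

```python
# Assumed timings for illustration only.
t_mem = 100e-9      # seconds per memory reference
t_disk = 8e-3       # seconds per page read from disk

def eat(h):
    """EAT = h * tmem + (1 - h) * tdisk."""
    return h * t_mem + (1 - h) * t_disk

# Even one fault per 1,000 references dominates the average:
print(eat(1.0))     # no faults: just the memory access time
print(eat(0.999))   # one fault per thousand references

# With multiple access classes, sum fraction * time over all classes
# (the fractions must add up to 1):
def eat_multi(classes):
    """classes: list of (fraction, access_time) pairs."""
    return sum(f * t for f, t in classes)
```

With these numbers, a fault rate of just 0.1% makes the average reference tens of microseconds, dozens of times slower than a fault-free reference.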

Page selection and replacement

Once the hardware has provided basic capabilities for virtual memory, the OS must make two kinds of scheduling decisions:

  • Page selection: when to bring pages into memory.
  • Page replacement: which page(s) should be thrown out, and when.

Page Selection Algorithms:

  • Demand paging: start up the process with no pages loaded, and load a page only when a page fault for it occurs, i.e., not until it absolutely MUST be in memory. Almost all paging systems are like this.
  • Request paging: let the user say which pages are needed. The trouble is, users do not always know best, and are not always impartial. They will overestimate needs.
  • Prepaging: bring a page into memory before it is referenced (e.g., when one page is referenced, bring in the next one, just in case). Hard to do effectively without a prophet; may spend a lot of time doing wasted work.

Page Replacement Algorithms:

  • Random: pick any page at random (works surprisingly well!).
  • FIFO: throw out the page that has been in memory the longest. The idea is to be fair, give all pages equal residency.
  • MIN: throw out the page that will not be referenced for the longest time in the future. Naturally, the best algorithm arises if we can predict the future.
  • LFU: use the frequency of past references to predict the future.
  • LRU: use the order of past references to predict the future.

Example: Try the reference string A B C A B D A D B C B, and assume there are three page frames of physical memory. Show the memory allocation state after each memory reference.
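One way to check your trace: a short Python simulation of LRU (one of the policies above) on that reference string with three frames. This is a sketch, not how an OS implements LRU; it keeps the frames in recency order, most recently used last.

```python
def simulate_lru(refs, nframes):
    """Simulate LRU replacement, printing the frame state after each reference."""
    frames = []                     # least recently used first, MRU last
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)     # hit: move page to the MRU position
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)       # fault with full memory: evict the LRU page
        frames.append(page)
        print(page, frames)
    return faults

faults = simulate_lru("ABCABDADBCB", 3)   # -> 5 page faults
```

Swapping the eviction rule (e.g., pop a random frame, or the oldest-loaded frame for FIFO) turns the same loop into a simulator for the other policies.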

Note that MIN is optimal (cannot be beaten), but the principle of locality states that past behavior predicts future behavior, so LRU should do just about as well.

Implementing LRU: needs some form of hardware support in order to keep track of which pages have been used recently.

  • Perfect LRU? Keep a register for each page, and store the system clock into that register on each memory reference. To replace a page, scan through all of them to find the one with the oldest clock. This is expensive if there are a lot of memory pages.
  • In practice, nobody implements perfect LRU. Instead, we settle for an approximation that is efficient. Just find an old page, not necessarily the oldest. LRU is just an approximation anyway (why not approximate a little more?).
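As a sketch only (real hardware would use a per-page register written by the memory system, not a Python dictionary), the perfect-LRU bookkeeping described above might look like:

```python
import itertools

clock = itertools.count()           # stands in for the system clock

class PerfectLRU:
    """Timestamp-per-page LRU: O(n) scan over all frames on every eviction."""
    def __init__(self, nframes):
        self.nframes = nframes
        self.stamp = {}             # page -> clock value at last reference

    def reference(self, page):
        """Reference a page; return True if it caused a page fault."""
        fault = page not in self.stamp
        if fault and len(self.stamp) == self.nframes:
            victim = min(self.stamp, key=self.stamp.get)  # oldest clock wins
            del self.stamp[victim]
        self.stamp[page] = next(clock)  # hardware would do this on every access
        return fault
```

The `min` scan on each fault is exactly the expense the notes warn about, which motivates the cheaper clock approximation below.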

Clock algorithm, thrashing

This is an efficient way to approximate LRU.

Clock algorithm: keep a "use" bit for each page frame; the hardware sets the appropriate bit on every memory reference. The operating system clears the bits from time to time in order to figure out how often pages are being referenced. To find a page to throw out, the OS circulates through the physical frames, clearing use bits as it goes, until it finds one whose bit is already zero; that page is evicted. The analogy: picture the frames arranged in a circle, with a clock hand sweeping around them.
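A minimal sketch of the clock algorithm in Python. The layout is assumed for illustration; in a real system the use bit lives in the page table entry and is set by the MMU, not by software on a hit.

```python
class Clock:
    """Clock (second-chance) page replacement over a fixed set of frames."""
    def __init__(self, nframes):
        self.pages = [None] * nframes   # which page occupies each frame
        self.use = [0] * nframes        # use bit per frame
        self.hand = 0                   # the sweeping clock hand

    def reference(self, page):
        """Reference a page; return True if it caused a page fault."""
        if page in self.pages:
            self.use[self.pages.index(page)] = 1  # hardware sets the use bit
            return False
        # Fault: sweep the hand, clearing use bits, until one is already zero.
        while self.use[self.hand]:
            self.use[self.hand] = 0     # give this page a second chance
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page    # evict whatever was here, load the page
        self.use[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.pages)
        return True
```

Note the eviction cost is bounded by one sweep of the frames, and no per-reference timestamps are needed: the use bits are the entire approximation of "recently used."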





Source:  OpenStax, Operating systems. OpenStax CNX. Aug 13, 2009 Download for free at http://cnx.org/content/col10785/1.2
