
All of these considerations (and more) can be formalized, quantified, and studied.

The Open Education Cup will accept entries relevant to any of the above architectures (except SISD). This might include case studies of parallel computers, comparisons of architectural approaches, design studies for future systems, or parallel computing hardware issues that we do not touch on here.

Parallel programming models and languages

Parallel computers require software in order to produce useful results. Writing that software requires a means of expressing it (that is, a language), which must in turn be based on an idea of how it will work (that is, a model). While it is possible to write programs specific to a given parallel architecture, many would prefer a more general model and language, just as most sequential programmers use a general language rather than working directly with assembly code.

Unfortunately, there is no single widely-accepted model of parallel computation comparable to the sequential Random Access Machine (RAM) model. In part, this reflects the diversity of parallel computer architectures and their resulting range of operations and costs. The following list gives just a few examples of the models that have been suggested.

  • Communicating Sequential Processes (CSP). This is a theoretical model of concurrent (that is, parallel) processes interacting solely through explicit messages. In that way, it closely models non-shared memory architectures. As the CSP model has developed, however, its use has emphasized validation of system correctness rather than development of algorithms and programs.
  • Parallel Random Access Machine (PRAM). In this model, all processors operate synchronously and have constant-time access to shared memory. This is similar to a shared memory architecture, and it is therefore useful for describing parallel algorithms for such machines. However, PRAM abstracts actual shared memory architectures by assuming that all memory can be accessed at equal cost, which is generally not true in practice.
  • Bulk Synchronous Parallel (BSP). In this model, all processors operate asynchronously and have direct access only to their own memory. Algorithms proceed in a series of “supersteps,” each consisting of local computation, communication exchanges, and a barrier synchronization (where processors wait at the barrier for all others to arrive). This can model either shared or non-shared memory MIMD architectures, but it simplifies many issues in organizing the communication and synchronization.
  • LogP (for Latency, overhead, gap, Processors). This model can be thought of as a refinement of BSP. LogP allows a more detailed treatment of architectural constraints such as interconnect network capacity. It also allows more detailed scheduling of communication, including overlap of computation with communication. LogP can model shared- and non-shared memory MIMD machines, but abstracts the specific topology of the network. (The capitalization is a pun on O(log P) theoretical bounds.)
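The CSP idea of processes interacting solely through explicit messages, with no shared state, survives directly in modern languages. Go's goroutines and channels were explicitly inspired by CSP, so a short Go sketch can make the model concrete. This is an illustrative example, not part of the original chapter; the `worker` function and the pipeline shape are our own invention.

```go
package main

import "fmt"

// worker is a CSP-style process: it interacts with the rest of the
// program only by receiving on 'in' and sending on 'out'. It squares
// each number it receives and closes 'out' when 'in' is exhausted.
func worker(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)

	// Launch the worker as a concurrent process. No memory is shared;
	// all coordination happens through the two channels.
	go worker(in, out)

	// A second process feeds the worker, then signals completion
	// by closing its channel.
	go func() {
		for i := 1; i <= 3; i++ {
			in <- i
		}
		close(in)
	}()

	// The main process consumes the results as messages arrive.
	for sq := range out {
		fmt.Println(sq) // prints 1, 4, 9
	}
}
```

Because every interaction is an explicit, blocking message, the communication structure of the program is visible in its text, which is exactly the property that makes CSP useful for reasoning about correctness.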
Source:  OpenStax, 2008-'09 open education cup: high performance computing. OpenStax CNX. Oct 28, 2008 Download for free at http://cnx.org/content/col10594/1.3