
Programming languages and systems for parallel computers are equally diverse, as the following list shows.

  • Libraries can encapsulate parallel operations. Libraries of synchronization and communication primitives are especially useful. For example, the MPI library provides the send and receive operations needed for a CSP-like message-passing model. When such libraries are called from sequential languages, there must be a convention for starting parallel operations. The most common is the Single Program Multiple Data (SPMD) paradigm, in which all processors execute the same program text; a minimal MPI sketch of this style appears after this list.
  • Extensions to sequential languages can follow a particular model of parallel computation. For example, OpenMP reflects a PRAM-like shared memory model, while High Performance Fortran (HPF) was based on a data-parallel model popularized by CM Fortran. Common extensions include parallel loops (often called “forall” to evoke “for” loops) and synchronization operations; an OpenMP parallel loop is sketched after this list.
  • Entire new languages can incorporate parallel execution at a deep level, in forms not directly tied to the programming models mentioned above. For example, Cilk, a parallel dialect of C, expresses parallel operations as independently spawned function calls (see the sketch after this list). Sisal was a functional language based on the concept of streams, in which the elements of a series (stream) of data could be processed independently. Both languages have been implemented successfully on a variety of platforms, showing the value of a non-hardware-specific abstraction.
  • Other languages more directly reflect a parallel architecture, or a class of such architectures. For example, Partitioned Global Address Space (PGAS) languages like Co-Array Fortran, Chapel, Fortress, and X10 treat memory as shared, but each processor can access its own portion of that memory much faster than other processors’ portions. This is similar to many non-uniform shared memory architectures in use today. CUDA is a rather different parallel language, designed for programming GPUs. It features explicit partitioning of the computation between the host (i.e., the controller) and the “device” (i.e., the GPU processors); a short CUDA sketch also follows this list. Although these languages are to some extent hardware-based, they are general enough that implementations on other platforms are possible.
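
To make the SPMD convention from the first item concrete, here is a minimal MPI sketch in C. It is not drawn from the module itself; the payload value and the choice of ranks are arbitrary, chosen only to show one matched send/receive pair in the CSP-like message-passing style.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* start the parallel run       */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which process am I?          */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many processes in total? */

        if (rank == 1) {
            int value = 42;                      /* arbitrary example payload */
            /* CSP-like send: one int to process 0, message tag 0 */
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            int value;
            /* matching receive from process 1 */
            MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("process 0 of %d received %d\n", size, value);
        }

        MPI_Finalize();
        return 0;
    }

Every process runs this same program text and is told apart only by its rank, which is exactly the SPMD convention; the job must be started with at least two processes (for example, mpirun -np 2).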
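
The language-extension approach can be sketched with an OpenMP parallel loop in C; the arrays and loop body below are invented for illustration. The directive asks the compiler to divide the iterations of an ordinary for loop among threads that share the arrays, much as an HPF forall distributes work over array elements.

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N], b[N];

        for (int i = 0; i < N; i++)      /* ordinary sequential setup */
            b[i] = i;

        #pragma omp parallel for         /* iterations are split among threads */
        for (int i = 0; i < N; i++)
            a[i] = 2.0 * b[i];

        printf("a[%d] = %g (up to %d threads)\n", N - 1, a[N - 1], omp_get_max_threads());
        return 0;
    }

Removing the pragma (or compiling without OpenMP support, e.g. without gcc -fopenmp) leaves a correct sequential program, which is typical of this style of extension.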
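
Cilk's style of spawning independent function calls can be sketched with the usual recursive Fibonacci example. The keyword spelling below follows the later Cilk Plus / OpenCilk dialects rather than the original MIT Cilk syntax, so treat it as an approximation of how the language reads.

    #include <cilk/cilk.h>
    #include <stdio.h>

    long fib(long n)
    {
        if (n < 2)
            return n;
        long x = cilk_spawn fib(n - 1);   /* this call may run in parallel...   */
        long y = fib(n - 2);              /* ...while the caller continues here */
        cilk_sync;                        /* wait for the spawned call          */
        return x + y;
    }

    int main(void)
    {
        printf("fib(30) = %ld\n", fib(30));
        return 0;
    }

Nothing in the source refers to processors or threads; the runtime system decides how the spawned calls are scheduled, which is part of what makes the abstraction portable across platforms.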
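
Finally, the explicit host/device partitioning of CUDA is sketched below; the kernel, array size, and scaling factor are invented for the example. The host allocates GPU memory, copies data over, launches the kernel, and copies the result back; the __global__ function is the device code, executed by one GPU thread per array element.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    /* device code: each GPU thread scales one array element */
    __global__ void scale(float *x, float s, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            x[i] *= s;
    }

    int main(void)
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *h = (float *)malloc(bytes);             /* host-side copy   */
        for (int i = 0; i < n; i++) h[i] = 1.0f;

        float *d;
        cudaMalloc((void **)&d, bytes);                /* device-side copy */
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

        scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);   /* host launches device code */

        cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
        printf("h[0] = %g\n", h[0]);                   /* expect 2 */

        cudaFree(d);
        free(h);
        return 0;
    }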

Neither of the lists above is in any way exhaustive. The Open Education Cup welcomes entries about any aspect of expressing parallel computation. This includes descriptions of parallel models, translations between models, descriptions of parallel languages or programming systems, implementations of the languages, and evaluations of models or systems.

Parallel algorithms and applications

Unlike performance improvements due to increased clock speed or better compilers, running faster on parallel architectures doesn’t just happen. Instead, parallel algorithms have to be devised to take advantage of multiple processors, and applications have to be updated to use those algorithms. The methods (and difficulty) of doing this vary widely. A few examples show the range of possibilities.

Source:  OpenStax, 2008-'09 open education cup: high performance computing. OpenStax CNX. Oct 28, 2008 Download for free at http://cnx.org/content/col10594/1.3