  • Data parallelism – updating many elements of a data structure simultaneously, like an image processing program that smooths out pixels (a minimal sketch of this pattern follows the list)
  • Pipelining – using separate processors on different stages of a computation, allowing several problems to be “in the pipeline” at once
  • Task parallelism – assigning different processors to different conceptual tasks, such as a climate simulation that separates the atmospheric and oceanic models
  • Task farms – running many similar copies of a task for later combining, as in Monte Carlo simulation or testing design alternatives
  • Speculative parallelism – trying alternatives that may not turn out to be necessary in order to pre-compute possible answers, as in searching for the best move in a game tree
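
As a concrete illustration of the first pattern above (our sketch, not part of the original module), the following C program applies a one-dimensional "smoothing" pass in which every interior element is updated independently of the others, so all the loop iterations may run simultaneously. The OpenMP pragma, array size, and pixel values are illustrative assumptions; compile with -fopenmp (GCC/Clang) to enable the pragma, which is otherwise ignored.

#define N 1000

int main(void) {
    static float in[N], out[N];
    for (int i = 0; i < N; i++)
        in[i] = (float)(i % 7);                  /* made-up "pixel" values */
    /* Data parallelism: each iteration writes a distinct element of out
       and only reads in, so all N-2 updates can proceed at once. */
    #pragma omp parallel for
    for (int i = 1; i < N - 1; i++)
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0f;
    return 0;
}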

However, thinking in these new ways may conflict with traditional sequential thinking. For example, a speculative program may do more total computation than its sequential counterpart, albeit more quickly. As another example, task-parallel programs may repeat calculations to avoid synchronizing and moving data. Moreover, some algorithms (such as depth-first search) are theoretically impossible to execute in parallel, because each step requires information from the one before it; these must be replaced with others (such as breadth-first search) that can run in parallel. Other algorithms require unintuitive changes to execute in parallel. For example, consider the following program:

for i = 2 to N
    a[i] = a[i] + a[i-1]
end

One might think that this can only be computed sequentially. After all, every element a[i] depends on the one computed before it. But this is not so. We leave it to other module-writers to explain how the elements a[i] can all be computed in log N data-parallel steps; for now, we simply note that the parallel program is not a simple loop (a sketch of one such program appears below). In summary, parallel computing requires parallel thinking, and this is very different from the sequential thinking taught to our computer and computational scientists.
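
To make that concrete, here is one such non-loop program (our sketch, not the module's solution): the Hillis-Steele scan, written 0-indexed in C, which produces the same prefix sums in log N sweeps. Within each sweep, every element update is independent of the others and could execute in parallel; the array size and contents are made up for the example.

#include <stdio.h>
#include <string.h>
#define N 8

int main(void) {
    int a[N] = {1, 2, 3, 4, 5, 6, 7, 8};    /* made-up input */
    int tmp[N];
    /* log2(N) sweeps; each inner loop is one data-parallel step. */
    for (int stride = 1; stride < N; stride *= 2) {
        for (int i = 0; i < N; i++)
            tmp[i] = (i >= stride) ? a[i] + a[i - stride] : a[i];
        memcpy(a, tmp, sizeof a);            /* double-buffer to avoid races */
    }
    for (int i = 0; i < N; i++)
        printf("%d ", a[i]);                 /* prints 1 3 6 10 15 21 28 36 */
    printf("\n");
    return 0;
}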

This brings us to the need for new, different, and hopefully better training in parallel computing. We must train new students to embrace parallel computing if the field is going to advance. Equally, we must retrain programmers working today if we want tomorrow's programs to be better than today's. To that end, the Open Education Cup is soliciting educational modules in five areas at the core of parallel computing:

  • Parallel Architectures: Descriptions of parallel computers and how they operate, including particular systems (e.g. multicore chips) and general design principles (e.g. building interconnection networks). More information about this area is in the Parallel Architectures section.
  • Parallel Programming Models and Languages: Abstractions of parallel computers useful for programming them efficiently, including both machine models (e.g. PRAM or LogP) and languages or extensions (e.g. OpenMP or MPI; a minimal MPI sketch follows this list). More information about this area is in the Parallel Programming Models and Languages section.
  • Parallel Algorithms and Applications: Methods of solving problems on parallel computers, from basic computations (e.g. parallel FFT or sorting algorithms) to full systems (e.g. parallel databases or seismic processing). More information about this area is in the Parallel Algorithms and Applications section.
  • Parallel Software Tools: Tools for understanding and improving parallel programs, including parallel debuggers (e.g. TotalView) and performance analyzers (e.g. HPCtoolkit and TAU Performance System). More information about this area is in the Parallel Software Tools section.
  • Accelerated Computing: Hardware and associated software that add parallel performance to more conventional systems, from Graphics Processing Units (GPUs) to Field Programmable Gate Arrays (FPGAs) to Application-Specific Integrated Circuits (ASICs) to innovative special-purpose hardware. More information about this area is in the Accelerated Computing section.
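
As promised above, here is a minimal MPI sketch (ours, not part of the original module) of the kind of program the Parallel Programming Models and Languages area covers: each process in a parallel job reports its rank and the total process count. It assumes an MPI implementation is installed (compile with mpicc, launch with mpirun).

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}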

Source: OpenStax, 2008-'09 Open Education Cup: High Performance Computing. OpenStax CNX, Oct 28, 2008. Download for free at http://cnx.org/content/col10594/1.3