
Upper-division undergraduate college students

The challenge is to try parallel computing, not just talk about it.

During the week of May 21st to May 26th in 2006, this author attended a workshop on Parallel and Distributed Computing.  The workshop was given by the National Computational Science Institute and introduced parallel programming using multiple computers (a group of microcomputers clustered into a "super-micro" computer).  The workshop emphasized several important points related to the computer industry:

  1. During the past few years, super-micro computers have become more powerful and more available.
  2. Desktop computers are starting to be built with multiple processors (or cores), and we will have processors with many (10 to 30) cores within a few years.
  3. Use of super-micro computing power is widespread and growing in all areas: scientific research, engineering applications, 3D animation for computer games and education, and so on.
  4. There is a shortage of educators, scientific researchers, and computer professionals who know how to manage and utilize this developing resource. The computer professionals needed include technicians who know how to create and maintain a super-micro computer, and programmers who know how to create computer applications that use parallel programming concepts.

This last item was emphasized for those of you beginning a career in computer programming: as you progress in your education, be aware of the changing nature of computer programming as a profession.  Within a few years, all professional programmers will have to be familiar with parallel programming.

During the workshop this author wrote a program that sorts an array of 150,000 integers using two different approaches.  The first approach used no parallel processing; when it was compiled and executed on a single machine, it took 120.324 seconds to run (about 2 minutes).  The second approach redesigned the program so parts of it could be run on several processors at the same time; when it was compiled and executed using 11 machines within a cluster of microcomputers, it took 20.974 seconds to run.  That is approximately 6 times faster.  Thus, parallel programming will become a necessity for utilizing the multi-processor hardware of the near future.
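
The workshop source files themselves are linked farther below. Purely as an illustration of the sequential approach, here is a minimal sketch that fills an array of 150,000 integers, sorts it, and times the work with std::chrono. The choice of a simple bubble sort, the random test data, and the timing code are assumptions made for this sketch; they are not the contents of nonps8.cpp.

    // A minimal sketch of a sequential (non-parallel) sorting baseline.
    // This is NOT the workshop's nonps8.cpp; it only illustrates timing
    // a single-machine sort of 150,000 integers.
    #include <chrono>
    #include <cstdlib>
    #include <iostream>
    #include <utility>
    #include <vector>

    // Simple O(n*n) bubble sort, assumed here for illustration only.
    static void bubbleSort(std::vector<int>& a)
    {
        for (std::size_t pass = 0; pass + 1 < a.size(); ++pass)
            for (std::size_t i = 0; i + 1 < a.size() - pass; ++i)
                if (a[i] > a[i + 1])
                    std::swap(a[i], a[i + 1]);
    }

    int main()
    {
        const std::size_t kCount = 150000;        // same array size as the experiment above
        std::vector<int> data(kCount);
        for (std::size_t i = 0; i < kCount; ++i)
            data[i] = std::rand();                // pseudo-random test data

        auto start = std::chrono::steady_clock::now();
        bubbleSort(data);                         // the sequential work being timed
        auto stop = std::chrono::steady_clock::now();

        std::chrono::duration<double> seconds = stop - start;
        std::cout << "Sorted " << kCount << " integers in "
                  << seconds.count() << " seconds\n";
        return 0;
    }

On a single machine, all of the sorting work happens one comparison at a time; the parallel version divides that same work among the nodes of the cluster.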

A distributed computing environment was set up in a normal computer lab using a Linux operating system stored on a CD. After booting several computers from the CD, the computers can communicate with each other with the support of "Message Passing Interface" or MPI commands (a minimal MPI test program is sketched after the link below). This model, known as the Bootable Cluster CD (BCCD), is available from:

Bootable Cluster CD – University of Northern Iowa at: (External Link)
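
Once several machines have booted from the BCCD and can see each other, a very small MPI program can confirm that the nodes communicate. The sketch below is a generic illustration of basic MPI commands (rank and size queries); it is not part of the BCCD distribution, and the exact compile and launch commands vary with the MPI installation.

    // Minimal MPI check: every node reports its rank (its position in the
    // cluster) and the total number of cooperating nodes.  Typically built
    // with an MPI compiler wrapper (e.g. mpic++) and launched with mpirun.
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char* argv[])
    {
        MPI_Init(&argc, &argv);                  // start the MPI environment

        int rank = 0;                            // this machine's id (0, 1, 2, ...)
        int size = 0;                            // how many machines are cooperating
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        std::cout << "Hello from node " << rank << " of " << size << std::endl;

        MPI_Finalize();                          // shut down MPI cleanly
        return 0;
    }

If every booted machine prints a line, the cluster and its MPI support are working.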

The source code files used during the above workshop were modified to version 8; thus, an 8 appears in each filename. The non-parallel-processing "super" code was named nonps8.cpp, and the parallel-processing "super" code was named ps8.cpp. (Note: The parallel-processing code contains comments that describe the part of the code run by the machine identified as the "SERVER_NODE" and the part run by the 10 other machines (the clients). The client machines communicate critical information to the server node using "Message Passing Interface" or MPI commands.) A simplified sketch of this server/client pattern appears after the download links below.

You may need to right-click on the link and select "Save Target As" in order to download these source code files.

Download the source code file from Connexions: nonps8.cpp

Download the source code file from Connexions: ps8.cpp
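
The sketch below only illustrates the server/client pattern described above: the SERVER_NODE (rank 0) scatters equal slices of the array to the other nodes, every node sorts its slice, and the sorted slices are gathered back and merged on the server. This is NOT the workshop's ps8.cpp; the array size, the use of std::sort, the scatter/gather calls, and the final merge step are assumptions made for illustration.

    // Simplified server/client sorting sketch using MPI scatter and gather.
    #include <mpi.h>
    #include <algorithm>
    #include <cstdlib>
    #include <iostream>
    #include <vector>

    const int SERVER_NODE = 0;                   // rank that coordinates the work

    int main(int argc, char* argv[])
    {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int kTotal = 150000 - (150000 % size); // trim so the slices divide evenly
        const int kSlice = kTotal / size;

        std::vector<int> whole;                   // only the server fills the full array
        if (rank == SERVER_NODE) {
            whole.resize(kTotal);
            for (int i = 0; i < kTotal; ++i)
                whole[i] = std::rand();
        }

        // Each node (server and clients alike) receives one slice of the array.
        std::vector<int> slice(kSlice);
        MPI_Scatter(whole.data(), kSlice, MPI_INT,
                    slice.data(), kSlice, MPI_INT,
                    SERVER_NODE, MPI_COMM_WORLD);

        std::sort(slice.begin(), slice.end());    // the work done in parallel

        // The clients send their sorted slices back to the server.
        MPI_Gather(slice.data(), kSlice, MPI_INT,
                   whole.data(), kSlice, MPI_INT,
                   SERVER_NODE, MPI_COMM_WORLD);

        if (rank == SERVER_NODE) {
            // Merge the sorted slices, one at a time, into one sorted array.
            for (int merged = kSlice; merged < kTotal; merged += kSlice)
                std::inplace_merge(whole.begin(), whole.begin() + merged,
                                   whole.begin() + merged + kSlice);
            std::cout << "Array sorted across " << size << " nodes: "
                      << std::boolalpha
                      << std::is_sorted(whole.begin(), whole.end()) << std::endl;
        }

        MPI_Finalize();
        return 0;
    }

In the workshop's setup of 11 machines, one rank played the role of the SERVER_NODE and the other 10 were the clients; the scatter, sort, and gather pattern above is one common way to divide sorting work of this kind, and it runs unchanged for any number of nodes.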

Two notable resources with supercomputer information were provided by presenters during the workshop:

Oklahoma University – Supercomputing Center for Education & Research at: (External Link)

Contra Costa College – High Performance Computing at: (External Link)

You can also "Google" the topic's keywords and spend several days reading and experimenting with High Performance Computing.

Consider reviewing the "Educator Resources" links provided in the next section.

Educator resources

There are many sites that provide materials and assistance to those teaching the many aspects of High Performance Computing. A few of them are:

Shodor – A National Resource for Computational Science Education at: (External Link)

CSERD – Computational Science Education Reference Desk at: (External Link)

National Computational Science Institute at: (External Link)

Association for Computing Machinery at: (External Link)

Super Computing – Education at: (External Link)

Simple definitions

high performance computing
Grouping multiple computers or multiple computer processors to accomplish a task in less time.
sequential processing
Using only one processor and completing the tasks in a sequential order.
parallel processing
Dividing a task into parts that can utilize more than one processor.
central processing unit
The electronic circuitry that actually executes computer instructions.
parallel programming
Developing programs that use parallel processing algorithms to take advantage of multiple processors.





Source:  OpenStax, Programming fundamentals - a modular structured approach using c++. OpenStax CNX. Jan 10, 2013 Download for free at http://cnx.org/content/col10621/1.22
