
How do I parallelize my code with MPI?


A very simple example of how to parallelize code with MPI is given in the monte.f Fortran program.

Only a few MPI calls are needed to parallelize this Monte Carlo calculation of pi. The first

 call MPI_INIT(ierr) 

initializes the MPI environment and must be called before any other MPI routine. The next two calls

 call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
 call MPI_COMM_SIZE(MPI_COMM_WORLD, np, ierr)

are used to determine the "rank", i.e. the number identifying the currently running process, and the total number of processes (the "size"). The identifier MPI_COMM_WORLD labels the group of processes assigned to this task, called a "communicator".
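
To see what the rank and size are for, a common pattern is to use them to divide the work among the processes. The following is a hypothetical sketch, not taken from monte.f (the program name split_work and the item count ntotal are made up for illustration), that deals out loop iterations round-robin:

 program split_work
   use mpi
   implicit none
   integer :: ierr, rank, np, i
   integer, parameter :: ntotal = 12   ! total number of work items (assumed value)

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, np, ierr)

   ! round-robin division: this process handles items
   ! rank+1, rank+1+np, rank+1+2*np, ...
   do i = rank + 1, ntotal, np
      print *, 'process', rank, 'of', np, 'handles item', i
   end do

   call MPI_FINALIZE(ierr)
 end program split_work

Each process executes the same loop but touches only every np-th item, so adding processes shrinks each one's share of the work. With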

 call MPI_REDUCE(pi,pisum,1,MPI_DOUBLE_PRECISION,&
                 MPI_SUM,0,MPI_COMM_WORLD,ierr)

the partial results (pi) from the different processes are summed up into the total (pisum) on process 0. Combining values from all processes while collecting them in one place is called a "reduction". Finally,

 call MPI_FINALIZE(ierr)

shuts down the MPI environment; it must be the last MPI call in the program.
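
Putting the pieces together, the whole program has roughly the following shape. This is a minimal free-form sketch under some assumptions, not the monte.f source itself: the sample count n and the per-rank seeding scheme are made up for illustration.

 program monte_pi
   use mpi
   implicit none
   integer :: ierr, rank, np, i, n, nseed, hits
   integer, allocatable :: seed(:)
   double precision :: x, y, pi, pisum

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, np, ierr)

   ! seed each rank differently so the processes sample different points
   ! (12345 is an arbitrary base seed chosen for this sketch)
   call random_seed(size=nseed)
   allocate(seed(nseed))
   seed = 12345 + rank
   call random_seed(put=seed)

   n = 1000000          ! samples per process (assumed value)
   hits = 0
   do i = 1, n
      call random_number(x)
      call random_number(y)
      if (x*x + y*y <= 1.0d0) hits = hits + 1
   end do
   pi = 4.0d0 * dble(hits) / dble(n)   ! this process's estimate of pi

   ! sum the per-process estimates onto rank 0, then average
   call MPI_REDUCE(pi, pisum, 1, MPI_DOUBLE_PRECISION, &
                   MPI_SUM, 0, MPI_COMM_WORLD, ierr)
   if (rank == 0) print *, 'pi is approximately', pisum / dble(np)

   call MPI_FINALIZE(ierr)
 end program monte_pi

Built with the MPI compiler wrapper and launched with, typically, mpif90 monte_pi.f90 -o monte_pi and mpirun -np 4 ./monte_pi, each process computes its own estimate of pi and rank 0 prints the average.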

To get an idea of how to use MPI and what the various routines do, check out the MPI workshop at the Maui HPC Centre site. For a list of routines in the MPI standard, and a reference manual of their usage, go to the Sun Documentation Website and search for the Sun MPI Programming and Reference Guide.

We offer a separate MPI FAQ with more information about this system.

Although the MPI standard comprises hundreds of routines, you can write very stable and scalable code with only a dozen or so of them. In fact, the simpler you keep it, the better it will usually work.

