To use MPI on our clusters, you will have to do the following things:
- Include the MPI header file at the top of all subroutines that use MPI, i.e.
for Fortran INCLUDE 'mpif.h' and for C #include <mpi.h>
This is important because it defines the variables and constants used by the MPI system.
- Compile and link with the following flags:
-I/opt/SUNWhpc/include -L/opt/SUNWhpc/lib -R/opt/SUNWhpc/lib -lmpi
These tell the compiler, linker, and runtime environment where to look for include files, static libraries, and runtime dynamic libraries. The -lmpi flag links in the MPI library.
- As an alternative to the above flags, you can use the
tmf90, tmcc, or tmCC
wrapper commands for Fortran, C, and C++, respectively, instead of the standard compilers/linkers. These supply the correct compile and link flags automatically, including -lmpi.
- To run MPI programs, a special multi-processor runtime environment is needed. It allows you to specify how many processes are used for the execution of the program, from which pool of processors they should be taken, etc. The most important command is
mpirun [options]
where options specify the parameters of the run.
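The steps above can be illustrated with a minimal MPI program in C. This is a sketch, not HPCVL-supplied code; the file name hello_mpi.c is an assumption, and only standard MPI calls are used. It cannot run outside an MPI environment.

```c
/* hello_mpi.c -- minimal MPI example (file name is an assumption) */
#include <stdio.h>
#include <mpi.h>   /* defines the MPI constants, types, and functions */

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's number (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
```

Each process launched by mpirun executes this same program and prints its own rank.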
The mpirun command is part of the ClusterTools programming environment; it is required to run MPI programs and to allocate the separate processes across the multi-processor system. ClusterTools is set up as part of the default environment on our cluster: the /opt/SUNWhpc/bin directory must be in your PATH, which it is by default.
mpirun lets you specify the number of processors, e.g.
mpirun -np 4 test_par
runs the MPI program test_par on 4 processors. This command has myriad other options, many of which concern details of process allocation that are handled automatically on HPCVL clusters and therefore need not concern the user.
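Putting the pieces together, a complete compile-and-run session might look as follows. This is a sketch under the assumptions above: the source file name test_par.c is hypothetical, and cc stands for your system C compiler.

```shell
# Compile and link against the MPI library explicitly:
cc -o test_par test_par.c \
   -I/opt/SUNWhpc/include -L/opt/SUNWhpc/lib -R/opt/SUNWhpc/lib -lmpi

# ...or let the ClusterTools wrapper supply the flags (including -lmpi):
tmcc -o test_par test_par.c

# Run the program on 4 processors:
mpirun -np 4 test_par
```

Only one of the two compile commands is needed; the wrapper form is usually the more convenient.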
For help on ClusterTools, consult Sun's Documentation Site and search for HPC Cluster Tools User's Guide.