Parallel Computing on Distributed Systems using MPI
Parallel Computing:
A Second Example:
#include <stdio.h>
#include <mpi.h>

int main( int argc, char **argv )
{
    int rank, size;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    printf( "Hello world! I'm %d of %d\n", rank, size );
    MPI_Finalize();
    return 0;
}
These sample programs have been kept as simple as possible by assuming that all processes can do output. Not all parallel systems provide this feature, and MPI provides a way to handle this case.
Summary of basic MPI routines (functions):
MPI_Init       Initialize MPI
MPI_Comm_size  Find out how many processes there are
MPI_Comm_rank  Find out which process I am
MPI_Bcast      Send data to all other processes
MPI_Reduce     Collect and combine data from all processes
MPI_Send       Send a message
MPI_Recv       Receive a message
MPI_Wtime      Return the wall-clock time on a process
MPI_Finalize   Terminate MPI
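As a sketch of how several of these routines fit together, the following program approximates pi by having process 0 broadcast the number of intervals with MPI_Bcast, letting every process compute a partial midpoint-rule sum, and combining the results with MPI_Reduce; MPI_Wtime times the computation. The interval count n = 100000 is an arbitrary choice for illustration.

```c
#include <stdio.h>
#include <mpi.h>

int main( int argc, char **argv )
{
    int rank, size, i, n = 100000;   /* n chosen arbitrarily */
    double h, x, local_sum = 0.0, pi = 0.0, t0, t1;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    t0 = MPI_Wtime();

    /* Process 0 owns n; broadcast it to all other processes. */
    MPI_Bcast( &n, 1, MPI_INT, 0, MPI_COMM_WORLD );

    /* Each process sums its share of the midpoint rule for
       the integral of 4/(1+x^2) on [0,1], which equals pi. */
    h = 1.0 / n;
    for ( i = rank; i < n; i += size ) {
        x = h * ( i + 0.5 );
        local_sum += 4.0 / ( 1.0 + x * x );
    }
    local_sum *= h;

    /* Combine the partial sums onto process 0. */
    MPI_Reduce( &local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD );

    t1 = MPI_Wtime();
    if ( rank == 0 )
        printf( "pi is approximately %.16f (%.6f seconds)\n", pi, t1 - t0 );

    MPI_Finalize();
    return 0;
}
```

Compile and run it the same way as the earlier examples, e.g. mpicc followed by mpirun -np 4 a.out.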
Running a C program on our Department Falcon clusters:
First list the cluster machines and your user name in a .rhosts file in your home directory:
falcon2 your_user_name
falcon3 your_user_name
.....
falcon20 your_user_name
Then execute: chmod 600 .rhosts
Compile with: mpicc your_source_code.c -lm
Run with: mpirun -np 4 a.out    (-np gives the number of processes)
General Procedure:
For simple programs, special compiler commands can be used. For large projects, it is best to use a standard Makefile. The MPICH implementation of MPI provides the commands mpicc and mpif77 as well as Makefile examples in:
/usr/local/mpi/examples/Makefile.in
Note: There are many implementations of MPI. MPICH is a freely available and portable implementation.
The commands
mpicc -o first first.c
mpif77 -o firstf firstf.f
may be used to build simple programs when using MPICH.
These commands provide special options that exploit the profiling features of MPI:
-mpilog     Generate log files of MPI calls
-mpitrace   Trace execution of MPI calls
-mpianim    Real-time animation of MPI (not available on all systems)
These are specific to the MPICH implementation; other implementations may provide similar commands (e.g., mpcc and mpxlf on the IBM SP2).
Using Makefiles:
The file Makefile.in is a template Makefile. The program (script) mpireconfig translates this to a Makefile for a particular system. This allows you to use the same Makefile for a network of workstations and a massively parallel computer, even when they use different compilers, libraries, and linker options.
mpireconfig Makefile
Note that you must have mpireconfig in your PATH.
A Sample Makefile.in file

Running MPI programs:
mpirun -np 2 hello
mpirun is not part of the standard, but some version of it is common with several MPI implementations. The version shown here is for the MPICH implementation of MPI. Just as Fortran does not specify how Fortran programs are started, MPI does not specify how MPI programs are started.
The option -t shows the commands that mpirun would execute; you can use this to find out how mpirun starts programs on your system. The option -help shows all options to mpirun.
Finding out about the environment:
Two of the first questions asked in a parallel program are: How many processes are there? and Who am I?
How many is answered with MPI_Comm_size and who am I is answered with MPI_Comm_rank.
The rank is a number between zero and size-1.
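The rank is also how processes take on different roles in point-to-point communication. The following minimal sketch uses MPI_Send and MPI_Recv from the routine summary above: process 0 sends an integer to every other process, which receives and prints it. The message tag 0 and the payload value 42 are arbitrary choices for illustration.

```c
#include <stdio.h>
#include <mpi.h>

int main( int argc, char **argv )
{
    int rank, size, value;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    if ( rank == 0 ) {
        int dest;
        value = 42;   /* arbitrary payload */
        for ( dest = 1; dest < size; dest++ )
            MPI_Send( &value, 1, MPI_INT, dest, 0, MPI_COMM_WORLD );
    } else {
        /* Receive one int from process 0 with tag 0. */
        MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
        printf( "Process %d of %d received %d from process 0\n",
                rank, size, value );
    }

    MPI_Finalize();
    return 0;
}
```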
Homework 1