A distributed computing system consists of multiple autonomous processors
that do not share primary memory but cooperate by communicating over a
network.
Message Passing Interface (MPI).
A library, callable from C, C++, and Fortran 77, used to implement the
message-passing model on a distributed system, aiming at efficiency,
portability, and functionality.
Running a C program on our Department's Falcon cluster:
Add /usr/local/mpi/bin to your path by
adding the following line to either your .cshrc or
.login file:
set path=($path . /usr/local/bin /usr/local/mpi/bin)
Create a .rhosts file in your home directory
with the following content:
falcon2 your_user_name
falcon3 your_user_name
.....
falcon20 your_user_name
Then execute: chmod 600 .rhosts
Commands to compile and run a program (example: your_source_code.c):
mpicc your_source_code.c -lm
mpirun -np 4 a.out        (-np gives the number of processes)
MPI_Init
Initialize MPI
MPI_Comm_size
Find out how many processes there are
MPI_Comm_rank
Find out which process I am
MPI_Bcast
Send data to all other processes
MPI_Reduce
Collect and combine data from all processes
MPI_Send
Send a message
MPI_Recv
Receive a message
MPI_Wtime
Return the wall-clock time (for timing a section of code)
MPI_Finalize
Terminate MPI
MPI_Init( &argc, &argv )
MPI_Comm_size(MPI_COMM_WORLD, &numtasks)
MPI_Comm_rank(MPI_COMM_WORLD, &rank)
MPI_Bcast(void* buf, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
MPI_Reduce(void* sendbuf, void* recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
MPI_Recv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
starttime = MPI_Wtime()
MPI_Finalize()
#include <stdio.h>
#include <math.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int n, myid, numprocs, i;
    double myq, q, h, sum, x;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    while (1) {
        if (myid == 0) {
            printf("Enter the number of intervals: (0 quits) ");
            scanf("%d", &n);
        }
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (n == 0)
            break;
        else {
            h = 1.0 / (double) n;
            sum = 0.0;
            /* each process sums every numprocs-th interval (midpoint rule) */
            for (i = myid + 1; i <= n; i += numprocs) {
                x = h * ((double)i - 0.5);
                sum += sqrt(1.0 - x*x);
            }
            myq = h * sum;
            /* add the partial results into q on process 0 */
            MPI_Reduce(&myq, &q, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (myid == 0)
                printf("q is approximately %.16f\n", q);
        }
    }
    MPI_Finalize();
    return 0;
}