2. Sending and Receiving messages.
To whom is data sent?
What is sent?
How does the receiver identify it?
Current Message-Passing
A typical blocking send looks like:
send( dest, type, address, length )
where
-- dest is an integer identifier representing the process to receive the message.
-- type is a nonnegative integer that the destination can use to selectively screen messages.
-- ( address, length) describes a contiguous area in memory containing the message to be sent.
A typical global operation looks like:
broadcast( type, address, length )
All of these specifications match the hardware well and are easy to understand, but they are too inflexible.
The Buffer
Sending and receiving only a contiguous array of bytes:
hides the real data structure from hardware which might be able to handle it directly
requires pre-packing dispersed data
-- rows of a matrix stored columnwise
-- general collections of structures
prevents communication between machines with different representations (even lengths) for the same data type.
The message buffer is specified in MPI by a starting address, a datatype, and a count, where the datatype is:
-- elementary (all C and Fortran datatypes)
-- contiguous array of datatypes
-- strided blocks of datatypes
-- indexed array of blocks of datatypes
-- general structure
Datatypes are constructed recursively.
Specification of elementary datatypes allows heterogeneous communication.
Elimination of length in favor of count is clearer.
Specifying application-oriented layout of data allows maximal use of special hardware.
Basic Terms in MPI routines:
buf : message (data) to be communicated
comm : communicator (a defined group of processes)
count : number of elements in the message
datatype : type of the elements in the message
dest : receiving process
ierror : returned error code (Fortran only)
op : reduction operation
root : master (originating) process
source : delivering process
status : status of the received message
tag : type or index of message
MPI Basic Send/Receive
The basic (blocking) send:
MPI_Send( start, count, datatype, dest, tag, comm )
C routine:
MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
Fortran routine:
MPI_Send(buf, count, datatype, dest, tag, comm, ierror)
The basic (blocking) receive:
MPI_Recv(start, count, datatype, source, tag, comm, status)
C routine:
MPI_Recv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
Fortran routine:
MPI_Recv(buf, count, datatype, source, tag, comm, status, ierror)
The source, tag, and count of the message actually received can be retrieved from status.
Broadcast and Reduction
The routine MPI_Bcast sends data from one process to all others.
C routine:
MPI_Bcast(void* buf, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
Fortran routine:
MPI_Bcast(buf, count, datatype, root, comm, ierror)
The routine MPI_Reduce combines data from all processes (in this case by adding them) and returns the result to a single process.
C routine:
MPI_Reduce(void* sendbuf, void* recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
Fortran routine:
MPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm, ierror)
Getting information about a message
MPI_Status status;
MPI_Datatype datatype;
int count;
MPI_Recv( ..., &status );
MPI_Get_count( &status, datatype, &count );
st_source = status.MPI_SOURCE;
st_tag = status.MPI_TAG;
MPI_Get_count may be used to determine how much data of a particular type was received.
C routine:
MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)
Fortran routine:
MPI_Get_count(status, datatype, count, ierror)
Sample C codes: