MPI_Sendrecv(3)                     LAM/MPI                    MPI_Sendrecv(3)

NAME

       MPI_Sendrecv - Sends and receives a message

SYNOPSIS

       #include <mpi.h>
       int MPI_Sendrecv(void *sbuf, int scount, MPI_Datatype sdtype,
                       int dest, int stag, void *rbuf, int rcount,
                       MPI_Datatype rdtype, int src, int rtag,
                       MPI_Comm comm, MPI_Status *status)

INPUT PARAMETERS

       sbuf   - initial address of send buffer (choice)
       scount - number of elements in send buffer (integer)
       sdtype - type of elements in send buffer (handle)
       dest   - rank of destination (integer)
       stag   - send tag (integer)
       rcount - number of elements in receive buffer (integer)
       rdtype - type of elements in receive buffer (handle)
       src    - rank of source (integer)
       rtag   - receive tag (integer)
       comm   - communicator (handle)

OUTPUT PARAMETERS

       rbuf   - initial address of receive buffer (choice)
       status - status object (Status).  This refers to the receive
              operation.  Can also be the MPI constant MPI_STATUS_IGNORE,
              if the return status is not desired.

NOTES

       To dispel a common misconception: src and dest do not have to be
       the same.  Additionally, a common mistake when using this function
       is to mismatch the tags with the source and destination ranks,
       which can result in deadlock.
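
       For example, in the following exchange between two ranks, each rank
       sends with tag 1 but posts its receive with tag 2, so neither
       receive can ever be matched and both calls block forever (a minimal
       sketch; the variables rank, sbuf, and rbuf are assumed to be set up
       as usual, and the tag values are illustrative):

       /* BUGGY: each side's receive tag (2) must instead equal the tag
          the other side sends with (1). */
       int peer = (rank == 0) ? 1 : 0;
       MPI_Sendrecv(sbuf, 1, MPI_INT, peer, 1,
                    rbuf, 1, MPI_INT, peer, 2,
                    MPI_COMM_WORLD, MPI_STATUS_IGNORE);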

       This function is guaranteed not to deadlock in situations where
       pairs of blocking sends and receives may deadlock.  For example,
       the following code may deadlock if all ranks in MPI_COMM_WORLD
       execute it simultaneously:

       int rank, size, to, from;
       int tag = 0;                        /* any matching tag value works */
       int send_buffer[1], recv_buffer[1]; /* minimal example payload */
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &size);
       to = (rank + 1) % size;
       from = (rank + size - 1) % size;
       MPI_Send(send_buffer, 1, MPI_INT, to, tag, MPI_COMM_WORLD);
       MPI_Recv(recv_buffer, 1, MPI_INT, from, tag, MPI_COMM_WORLD,
               MPI_STATUS_IGNORE);

       If even one rank's MPI_Send blocks and never completes, the entire
       operation may deadlock.  One alternative is to use MPI_Sendrecv in
       this situation because it is guaranteed not to deadlock.
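
       Rewriting the ring exchange above with a single MPI_Sendrecv per
       rank removes the possibility of deadlock; each rank sends to its
       right neighbor while receiving from its left neighbor (a minimal,
       self-contained sketch of the same pattern):

       #include <mpi.h>

       int main(int argc, char *argv[])
       {
           int rank, size, to, from, tag = 0;
           int send_buffer[1], recv_buffer[1];

           MPI_Init(&argc, &argv);
           MPI_Comm_rank(MPI_COMM_WORLD, &rank);
           MPI_Comm_size(MPI_COMM_WORLD, &size);
           to = (rank + 1) % size;
           from = (rank + size - 1) % size;
           send_buffer[0] = rank;

           /* The send and receive halves are progressed together by the
              implementation, so no cycle of blocking calls can form. */
           MPI_Sendrecv(send_buffer, 1, MPI_INT, to, tag,
                        recv_buffer, 1, MPI_INT, from, tag,
                        MPI_COMM_WORLD, MPI_STATUS_IGNORE);

           MPI_Finalize();
           return 0;
       }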

NOTES FOR FORTRAN

       All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK)
       have an additional argument ierr at the end of the argument list.
       ierr is an integer and has the same meaning as the return value of
       the routine in C.  In Fortran, MPI routines are subroutines, and
       are invoked with the call statement.

       All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER
       in Fortran.

ERRORS

       If an error occurs in an MPI function, the current MPI error handler
       is called to handle it.  By default, this error handler aborts the
       MPI job.  The error handler may be changed with MPI_Errhandler_set;
       the predefined error handler MPI_ERRORS_RETURN may be used to cause
       error values to be returned (in C and Fortran; this error handler is
       less useful with the C++ MPI bindings, where the predefined error
       handler MPI::ERRORS_THROW_EXCEPTIONS should be used instead if the
       error value needs to be recovered).  Note that MPI does not
       guarantee that an MPI program can continue past an error.
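
       For example, a program can install MPI_ERRORS_RETURN on a
       communicator and then test the value returned by each call (a
       minimal sketch; the self-exchange and message contents are
       illustrative):

       #include <mpi.h>
       #include <stdio.h>

       int main(int argc, char *argv[])
       {
           int rank, err, sbuf[1], rbuf[1];

           MPI_Init(&argc, &argv);
           /* Return error codes to the caller instead of aborting. */
           MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
           MPI_Comm_rank(MPI_COMM_WORLD, &rank);

           sbuf[0] = rank;
           /* A rank may safely MPI_Sendrecv with itself. */
           err = MPI_Sendrecv(sbuf, 1, MPI_INT, rank, 0,
                              rbuf, 1, MPI_INT, rank, 0,
                              MPI_COMM_WORLD, MPI_STATUS_IGNORE);
           if (err != MPI_SUCCESS) {
               char msg[MPI_MAX_ERROR_STRING];
               int len;
               MPI_Error_string(err, msg, &len);
               fprintf(stderr, "MPI_Sendrecv failed: %s\n", msg);
           }

           MPI_Finalize();
           return 0;
       }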

       All MPI routines (except MPI_Wtime and MPI_Wtick) return an error
       value; C routines as the value of the function and Fortran routines
       in the last argument.  The C++ bindings for MPI do not return error
       values; instead, error values are communicated by throwing
       exceptions of type MPI::Exception (but not by default).  Exceptions
       are only thrown if the error value is not MPI::SUCCESS.

       Note that if the MPI::ERRORS_RETURN handler is set in C++, while
       MPI functions will return upon an error, there will be no way to
       recover what the actual error value was.

       MPI_SUCCESS
              - No error; MPI routine completed successfully.
       MPI_ERR_COMM
              - Invalid communicator.  A common error is to use a null
              communicator in a call (not even allowed in MPI_Comm_rank).
       MPI_ERR_COUNT
              - Invalid count argument.  Count arguments must be
              non-negative; a count of zero is often valid.
       MPI_ERR_TYPE
              - Invalid datatype argument.  May be an uncommitted
              MPI_Datatype (see MPI_Type_commit).
       MPI_ERR_TAG
              - Invalid tag argument.  Tags must be non-negative; tags in a
              receive (MPI_Recv, MPI_Irecv, MPI_Sendrecv, etc.) may also be
              MPI_ANY_TAG.  The largest tag value is available through the
              attribute MPI_TAG_UB.
       MPI_ERR_RANK
              - Invalid source or destination rank.  Ranks must be between
              zero and the size of the communicator minus one; ranks in a
              receive (MPI_Recv, MPI_Irecv, MPI_Sendrecv, etc.) may also be
              MPI_ANY_SOURCE.

SEE ALSO

       MPI_Sendrecv_replace

MORE INFORMATION

       For more information, please see the official MPI Forum web site,
       which contains the text of both the MPI-1 and MPI-2 standards.
       These documents contain detailed information about each MPI
       function (most of which is not duplicated in these man pages).

       http://www.mpi-forum.org/

ACKNOWLEDGEMENTS

       The LAM Team would like to thank the MPICH Team for the handy
       program to generate man pages ("doctext" from
       ftp://ftp.mcs.anl.gov/pub/sowing/sowing.tar.gz), the initial
       formatting, and some initial text for most of the MPI-1 man pages.

LOCATION

       sendrecv.c

LAM/MPI 7.1.2                      2/23/2006                   MPI_Sendrecv(3)