MPI_Sendrecv_replace(3)             LAM/MPI            MPI_Sendrecv_replace(3)

NAME

       MPI_Sendrecv_replace - Sends and receives using a single buffer

SYNOPSIS

       #include <mpi.h>
       int MPI_Sendrecv_replace(void *buf, int count,
                              MPI_Datatype dtype, int dest, int stag,
                              int src, int rtag, MPI_Comm comm,
                              MPI_Status *status)

INPUT PARAMETERS

       count  - number of elements in send and receive buffer (integer)
       dtype  - type of elements in send and receive buffer (handle)
       dest   - rank of destination (integer)
       stag   - send message tag (integer)
       src    - rank of source (integer)
       rtag   - receive message tag (integer)
       comm   - communicator (handle)

OUTPUT PARAMETERS

       buf    - initial address of send and receive buffer (choice)
       status - status object (Status)

NOTES

       To dispel a common misconception: src and dest do not have to be
       the same.  Additionally, a common mistake when using this function
       is to mismatch the tags with the source and destination ranks,
       which can result in deadlock.

       This function is guaranteed not to deadlock in situations where
       pairs of blocking sends and receives may deadlock.  For example,
       the following code may deadlock if all ranks in MPI_COMM_WORLD
       execute it simultaneously:

       int rank, size, to, from;
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &size);
       to = (rank + 1) % size;
       from = (rank + size - 1) % size;
       MPI_Send(send_buffer, ..., to, tag, MPI_COMM_WORLD);
       MPI_Recv(recv_buffer, ..., from, tag, MPI_COMM_WORLD);

       If even one rank's MPI_Send blocks and never completes, the
       entire operation may deadlock.  One alternative is to use
       MPI_Sendrecv in this situation because it is guaranteed not to
       deadlock.

NOTES FOR FORTRAN

       All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK)
       have an additional argument ierr at the end of the argument list.
       ierr is an integer and has the same meaning as the return value
       of the routine in C.  In Fortran, MPI routines are subroutines,
       and are invoked with the call statement.

       All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type
       INTEGER in Fortran.

ERRORS

       If an error occurs in an MPI function, the current MPI error
       handler is called to handle it.  By default, this error handler
       aborts the MPI job.  The error handler may be changed with
       MPI_Errhandler_set; the predefined error handler
       MPI_ERRORS_RETURN may be used to cause error values to be
       returned (in C and Fortran; this error handler is less useful
       with the C++ MPI bindings, where the predefined error handler
       MPI::ERRORS_THROW_EXCEPTIONS should be used instead if the error
       value needs to be recovered).  Note that MPI does not guarantee
       that an MPI program can continue past an error.

       All MPI routines (except MPI_Wtime and MPI_Wtick) return an
       error value; C routines as the value of the function and Fortran
       routines in the last argument.  The C++ bindings for MPI do not
       return error values; instead, error values are communicated by
       throwing exceptions of type MPI::Exception (but not by default).
       Exceptions are only thrown if the error value is not
       MPI::SUCCESS.

       Note that if the MPI::ERRORS_RETURN handler is set in C++, while
       MPI functions will return upon an error, there will be no way to
       recover what the actual error value was.
       MPI_SUCCESS
              - No error; MPI routine completed successfully.
       MPI_ERR_COMM
              - Invalid communicator.  A common error is to use a null
              communicator in a call (not even allowed in
              MPI_Comm_rank).
       MPI_ERR_COUNT
              - Invalid count argument.  Count arguments must be
              non-negative; a count of zero is often valid.
       MPI_ERR_TYPE
              - Invalid datatype argument.  May be an uncommitted
              MPI_Datatype (see MPI_Type_commit).
       MPI_ERR_TAG
              - Invalid tag argument.  Tags must be non-negative; tags
              in a receive (MPI_Recv, MPI_Irecv, MPI_Sendrecv, etc.)
              may also be MPI_ANY_TAG.  The largest tag value is
              available through the attribute MPI_TAG_UB.

       MPI_ERR_RANK
              - Invalid source or destination rank.  Ranks must be
              between zero and the size of the communicator minus one;
              ranks in a receive (MPI_Recv, MPI_Irecv, MPI_Sendrecv,
              etc.) may also be MPI_ANY_SOURCE.

       MPI_ERR_TRUNCATE
              - Message truncated on receive.  The buffer size
              specified was too small for the received message.  This
              is a recoverable error in the LAM/MPI implementation.
       MPI_ERR_OTHER
              - This error is returned when some part of the LAM/MPI
              implementation is unable to acquire memory.

SEE ALSO

       MPI_Sendrecv

MORE INFORMATION

       For more information, please see the official MPI Forum web
       site, which contains the text of both the MPI-1 and MPI-2
       standards.  These documents contain detailed information about
       each MPI function (most of which is not duplicated in these man
       pages).

       http://www.mpi-forum.org/

ACKNOWLEDGEMENTS

       The LAM Team would like to thank the MPICH Team for the handy
       program to generate man pages ("doctext" from
       ftp://ftp.mcs.anl.gov/pub/sowing/sowing.tar.gz ), the initial
       formatting, and some initial text for most of the MPI-1 man
       pages.

LOCATION

       sendrecvrep.c
LAM/MPI 7.1.2                      2/23/2006           MPI_Sendrecv_replace(3)