MPI_Reduce_scatter(3)               LAM/MPI              MPI_Reduce_scatter(3)

NAME

       MPI_Reduce_scatter - Combines values and scatters the results

SYNOPSIS

       #include <mpi.h>
       int MPI_Reduce_scatter(void *sbuf, void *rbuf, int *rcounts,
                              MPI_Datatype dtype, MPI_Op op, MPI_Comm comm)

INPUT PARAMETERS

       sbuf   - starting address of send buffer (choice)
       rcounts
              - integer array specifying the number of elements of the
              result distributed to each process.  The array must be
              identical on all calling processes.
       dtype  - datatype of elements of the input buffer (handle)
       op     - operation (handle)
       comm   - communicator (handle)

OUTPUT PARAMETER

       rbuf   - starting address of receive buffer (choice)
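       An example may help clarify the relationship between sbuf, rbuf, and
       rcounts.  The following sketch is not part of the original page; it
       assumes an MPI-1 implementation (such as LAM/MPI) and sums integers
       across MPI_COMM_WORLD, scattering one element of the result to each
       rank:

```c
/* Hypothetical usage sketch: each process contributes nprocs integers;
 * after the call, rank i holds the element-wise sum of the i-th
 * elements contributed by all processes (since rcounts[i] == 1 here). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* rcounts must be identical on all calling processes; here every
     * process receives exactly one element of the result. */
    int *rcounts = malloc(nprocs * sizeof(int));
    int *sbuf = malloc(nprocs * sizeof(int));
    for (int i = 0; i < nprocs; ++i) {
        rcounts[i] = 1;
        sbuf[i] = rank + i;      /* arbitrary test data */
    }

    int result;                  /* holds rcounts[rank] elements */
    MPI_Reduce_scatter(sbuf, &result, rcounts, MPI_INT,
                       MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: result = %d\n", rank, result);

    free(sbuf);
    free(rcounts);
    MPI_Finalize();
    return 0;
}
```

       Note that the send buffer on every process must hold as many elements
       as the sum of all entries of rcounts (here, nprocs), while the
       receive buffer needs only rcounts[rank] elements.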

USAGE WITH IMPI EXTENSIONS

       LAM/MPI does not yet support invoking this function on a communicator
       that contains ranks that are non-local IMPI procs.

NOTES FOR FORTRAN

       All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK) have
       an additional argument ierr at the end of the argument list.  ierr is
       an integer and has the same meaning as the return value of the
       routine in C.  In Fortran, MPI routines are subroutines, and are
       invoked with the call statement.

       All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in
       Fortran.

NOTES ON COLLECTIVE OPERATIONS

       The reduction functions (MPI_Op) do not return an error value.  As a
       result, if the functions detect an error, all they can do is either
       call MPI_Abort or silently skip the problem.  Thus, if you change the
       error handler from MPI_ERRORS_ARE_FATAL to something else (e.g.,
       MPI_ERRORS_RETURN), then no error may be indicated.

       The reason for this is the performance problems that arise in
       ensuring that all collective routines return the same error value.

ERRORS

       If an error occurs in an MPI function, the current MPI error handler
       is called to handle it.  By default, this error handler aborts the
       MPI job.  The error handler may be changed with MPI_Errhandler_set;
       the predefined error handler MPI_ERRORS_RETURN may be used to cause
       error values to be returned (in C and Fortran; this error handler is
       less useful with the C++ MPI bindings.  The predefined error handler
       MPI::ERRORS_THROW_EXCEPTIONS should be used in C++ if the error value
       needs to be recovered).  Note that MPI does not guarantee that an MPI
       program can continue past an error.

       All MPI routines (except MPI_Wtime and MPI_Wtick) return an error
       value; C routines as the value of the function and Fortran routines
       in the last argument.  The C++ bindings for MPI do not return error
       values; instead, error values are communicated by throwing exceptions
       of type MPI::Exception (but not by default).  Exceptions are only
       thrown if the error value is not MPI::SUCCESS.

       Note that if the MPI::ERRORS_RETURN handler is set in C++, while MPI
       functions will return upon an error, there will be no way to recover
       what the actual error value was.
       MPI_SUCCESS
              - No error; MPI routine completed successfully.
       MPI_ERR_COMM
              - Invalid communicator.  A common error is to use a null
              communicator in a call (not even allowed in MPI_Comm_rank).
       MPI_ERR_OTHER
              - A collective implementation was not able to be located at
              run-time for this communicator.
       MPI_ERR_OTHER
              - A communicator that contains some non-local IMPI procs was
              used for some function which has not yet had the IMPI
              extensions implemented.  For example, most collectives on IMPI
              communicators have not been implemented yet.
       MPI_ERR_COUNT
              - Invalid count argument.  Count arguments must be
              non-negative; a count of zero is often valid.
       MPI_ERR_TYPE
              - Invalid datatype argument.  May be an uncommitted
              MPI_Datatype (see MPI_Type_commit).
       MPI_ERR_BUFFER
              - Invalid buffer pointer.  Usually a null buffer where one is
              not valid.
       MPI_ERR_OP
              - Invalid operation.  MPI operations (objects of type MPI_Op)
              must either be one of the predefined operations (e.g.,
              MPI_SUM) or created with MPI_Op_create.  Additionally, only
              certain datatypes are allowed with given predefined
              operations.  See MPI-1, section 4.9.2.
       MPI_ERR_BUFFER
              - This error class is associated with an error code that
              indicates that two buffer arguments are aliased; that is, they
              describe overlapping storage (often the exact same storage).
              This is prohibited in MPI (because it is prohibited by the
              Fortran standard, and rather than have a separate case for C
              and Fortran, the MPI Forum adopted the more restrictive
              requirements of Fortran).
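       Since the default handler aborts the job, a program that wants to
       inspect the error classes above must first install
       MPI_ERRORS_RETURN.  A brief sketch (an illustration added here, not
       part of the original page):

```c
/* Sketch: switch MPI_COMM_WORLD to MPI_ERRORS_RETURN so that
 * MPI_Reduce_scatter reports failures through its return value
 * instead of aborting the job. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int sbuf[64] = {0}, rbuf[64];
    int rcounts[64];             /* assumes nprocs <= 64 for brevity */
    for (int i = 0; i < nprocs; ++i)
        rcounts[i] = 1;

    int err = MPI_Reduce_scatter(sbuf, rbuf, rcounts, MPI_INT,
                                 MPI_SUM, MPI_COMM_WORLD);
    if (err != MPI_SUCCESS) {
        /* Decode the error value into a human-readable message. */
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(err, msg, &len);
        fprintf(stderr, "rank %d: MPI_Reduce_scatter: %s\n", rank, msg);
    }

    MPI_Finalize();
    return 0;
}
```

       MPI_Errhandler_set is the MPI-1 name used by this page; MPI-2
       deprecated it in favor of MPI_Comm_set_errhandler.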

MORE INFORMATION

       For more information, please see the official MPI Forum web site,
       which contains the text of both the MPI-1 and MPI-2 standards.  These
       documents contain detailed information about each MPI function (most
       of which is not duplicated in these man pages).

       http://www.mpi-forum.org/

ACKNOWLEDGEMENTS

       The LAM Team would like to thank the MPICH Team for the handy program
       to generate man pages ("doctext" from
       ftp://ftp.mcs.anl.gov/pub/sowing/sowing.tar.gz), the initial
       formatting, and some initial text for most of the MPI-1 man pages.

LOCATION

       reducescatter.c


LAM/MPI 7.1.2                      2/23/2006             MPI_Reduce_scatter(3)