MPI_Reduce(3)                       LAM/MPI                      MPI_Reduce(3)

NAME

       MPI_Reduce - Reduces values on all processes to a single value

SYNOPSIS

       #include <mpi.h>
       int MPI_Reduce(void *sbuf, void *rbuf, int count,
                      MPI_Datatype dtype, MPI_Op op, int root,
                      MPI_Comm comm)

INPUT PARAMETERS

       sbuf   - address of send buffer (choice)
       count  - number of elements in send buffer (integer)
       dtype  - data type of elements of send buffer (handle)
       op     - reduce operation (handle)
       root   - rank of root process (integer)
       comm   - communicator (handle)

OUTPUT PARAMETER

       rbuf   - address of receive buffer (choice, significant only at root)

USAGE WITH IMPI EXTENSIONS

       This function implements the IMPI extensions.  It is legal to call
       this function on IMPI communicators.

ALGORITHM

       If there are four or fewer ranks involved, the root loops over
       receiving from each rank, and then performs the final reduction
       locally.

       If there are more than four ranks involved, a tree-based algorithm is
       used to collate the reduced data at the root (the data is reduced at
       each parent in the tree, so the reduction operations are actually
       distributed).
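       The two strategies above can be illustrated with a plain-C simulation
       of a sum reduction (no MPI calls; the real implementation lives in
       reduce.c, and the exact crossover point and tree shape are those
       described above, not derived from this sketch).

```c
#include <assert.h>

/* Linear strategy (four or fewer ranks): the root receives from each rank
 * in turn and reduces locally.  Simulated with an array of per-rank values,
 * where vals[r] stands for rank r's send buffer. */
static int linear_reduce_sum(const int *vals, int n)
{
    int acc = vals[0];              /* root's own contribution */
    for (int r = 1; r < n; r++)
        acc += vals[r];             /* "receive" from rank r, then reduce */
    return acc;
}

/* Tree strategy (more than four ranks): pairs of ranks combine at each
 * round, so the reduction work is distributed across the parents of a
 * binomial tree instead of being done entirely at the root. */
static int tree_reduce_sum(const int *vals, int n)
{
    int acc[64];                    /* working copies; sketch assumes n <= 64 */
    for (int r = 0; r < n; r++)
        acc[r] = vals[r];
    for (int step = 1; step < n; step *= 2)
        for (int r = 0; r + step < n; r += 2 * step)
            acc[r] += acc[r + step];  /* child r+step "sends" to parent r */
    return acc[0];                  /* rank 0 (the root) holds the result */
}
```

       Both strategies compute the same value; the tree variant finishes in
       roughly log2(n) rounds instead of n - 1 receives at the root.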

NOTES FOR FORTRAN

       All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK) have
       an additional argument ierr at the end of the argument list.  ierr is
       an integer and has the same meaning as the return value of the
       routine in C.  In Fortran, MPI routines are subroutines, and are
       invoked with the call statement.

       All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in
       Fortran.

NOTES ON COLLECTIVE OPERATIONS

       The reduction functions (MPI_Op) do not return an error value.  As a
       result, if the functions detect an error, all they can do is either
       call MPI_Abort or silently skip the problem.  Thus, if you change the
       error handler from MPI_ERRORS_ARE_FATAL to something else (e.g.,
       MPI_ERRORS_RETURN), then no error may be indicated.

       The reason for this is the performance cost of ensuring that all
       collective routines return the same error value.

ERRORS

       If an error occurs in an MPI function, the current MPI error handler
       is called to handle it.  By default, this error handler aborts the
       MPI job.  The error handler may be changed with MPI_Errhandler_set;
       the predefined error handler MPI_ERRORS_RETURN may be used to cause
       error values to be returned (in C and Fortran; this error handler is
       less useful with the C++ MPI bindings.  The predefined error handler
       MPI::ERRORS_THROW_EXCEPTIONS should be used in C++ if the error value
       needs to be recovered).  Note that MPI does not guarantee that an MPI
       program can continue past an error.

       All MPI routines (except MPI_Wtime and MPI_Wtick) return an error
       value; C routines as the value of the function and Fortran routines
       in the last argument.  The C++ bindings for MPI do not return error
       values; instead, error values are communicated by throwing exceptions
       of type MPI::Exception (but not by default).  Exceptions are only
       thrown if the error value is not MPI::SUCCESS.

       Note that if the MPI::ERRORS_RETURN handler is set in C++, while MPI
       functions will return upon an error, there will be no way to recover
       what the actual error value was.
       MPI_SUCCESS
              - No error; MPI routine completed successfully.
       MPI_ERR_COMM
              - Invalid communicator.  A common error is to use a null
              communicator in a call (not even allowed in MPI_Comm_rank).
       MPI_ERR_OTHER
              - A collective implementation was not able to be located at
              run-time for this communicator.
       MPI_ERR_COUNT
              - Invalid count argument.  Count arguments must be
              non-negative; a count of zero is often valid.
       MPI_ERR_TYPE
              - Invalid datatype argument.  May be an uncommitted
              MPI_Datatype (see MPI_Type_commit).
       MPI_ERR_BUFFER
              - Invalid buffer pointer.  Usually a null buffer where one is
              not valid.
       MPI_ERR_BUFFER
              - This error class is associated with an error code that
              indicates that two buffer arguments are aliased; that is, they
              describe overlapping storage (often the exact same storage).
              This is prohibited in MPI (because it is prohibited by the
              Fortran standard, and rather than have a separate case for C
              and Fortran, the MPI Forum adopted the more restrictive
              requirements of Fortran).
       MPI_ERR_ROOT
              - Invalid root.  The root must be specified as a rank in the
              communicator.  Ranks must be between zero and the size of the
              communicator minus one.

MORE INFORMATION

       For more information, please see the official MPI Forum web site,
       which contains the text of both the MPI-1 and MPI-2 standards.  These
       documents contain detailed information about each MPI function (most
       of which is not duplicated in these man pages).

       http://www.mpi-forum.org/

ACKNOWLEDGEMENTS

       The LAM Team would like to thank the MPICH Team for the handy program
       to generate man pages ("doctext" from
       ftp://ftp.mcs.anl.gov/pub/sowing/sowing.tar.gz), the initial
       formatting, and some initial text for most of the MPI-1 man pages.

LOCATION

       reduce.c
LAM/MPI 7.1.2                      2/23/2006                     MPI_Reduce(3)