MPI_Exscan(3)                       LAM/MPI                      MPI_Exscan(3)


NAME

       MPI_Exscan - Computes the exclusive scan (partial reductions) of data
       on a collection of processes

SYNOPSIS

       #include <mpi.h>
       int MPI_Exscan(void *sbuf, void *rbuf, int count,
                      MPI_Datatype dtype, MPI_Op op, MPI_Comm comm)

INPUT PARAMETERS

       sbuf   - starting address of send buffer (choice)
       count  - number of elements in input buffer (integer)
       dtype  - data type of elements of input buffer (handle)
       op     - operation (handle)
       comm   - communicator (handle)

OUTPUT PARAMETER

       rbuf   - starting address of receive buffer (choice).  Not significant
              for rank 0.

              Note that MPI does not define this collective operation for
              intercommunicators.  Calling this function with an
              intercommunicator will raise the MPI_ERR_COMM exception.

USAGE WITH IMPI EXTENSIONS

       LAM/MPI does not yet support invoking this function on a communicator
       that contains ranks that are non-local IMPI procs.

NOTES FOR FORTRAN

       All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK) have
       an additional argument ierr at the end of the argument list.  ierr is
       an integer and has the same meaning as the return value of the
       routine in C.  In Fortran, MPI routines are subroutines and are
       invoked with the call statement.

       All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER
       in Fortran.

NOTES ON COLLECTIVE OPERATIONS

       The reduction functions (MPI_Op) do not return an error value.  As a
       result, if the functions detect an error, all they can do is either
       call MPI_Abort or silently skip the problem.  Thus, if you change the
       error handler from MPI_ERRORS_ARE_FATAL to something else (e.g.,
       MPI_ERRORS_RETURN), then no error may be indicated.

       The reason for this is the performance cost of ensuring that all
       collective routines return the same error value.

ERRORS

       If an error occurs in an MPI function, the current MPI error handler
       is called to handle it.  By default, this error handler aborts the
       MPI job.  The error handler may be changed with MPI_Errhandler_set;
       the predefined error handler MPI_ERRORS_RETURN may be used to cause
       error values to be returned (in C and Fortran; this error handler is
       less useful with the C++ MPI bindings.  The predefined error handler
       MPI::ERRORS_THROW_EXCEPTIONS should be used in C++ if the error
       value needs to be recovered).  Note that MPI does not guarantee that
       an MPI program can continue past an error.

       All MPI routines (except MPI_Wtime and MPI_Wtick) return an error
       value; C routines return it as the value of the function and Fortran
       routines in the last argument.  The C++ bindings for MPI do not
       return error values; instead, error values are communicated by
       throwing exceptions of type MPI::Exception (but not by default).
       Exceptions are only thrown if the error value is not MPI::SUCCESS.

       Note that if the MPI::ERRORS_RETURN handler is set in C++, MPI
       functions will return upon an error, but there will be no way to
       recover what the actual error value was.
       MPI_SUCCESS
              - No error; MPI routine completed successfully.
       MPI_ERR_COMM
              - Invalid communicator.  A common error is to use a null
              communicator in a call (not even allowed in MPI_Comm_rank).
       MPI_ERR_OTHER
              - A collective implementation was not able to be located at
              run-time for this communicator.
       MPI_ERR_OTHER
              - A communicator that contains some non-local IMPI procs was
              used with a function for which the IMPI extensions have not
              yet been implemented.  For example, most collectives on IMPI
              communicators have not been implemented yet.
       MPI_ERR_COUNT
              - Invalid count argument.  Count arguments must be
              non-negative; a count of zero is often valid.
       MPI_ERR_TYPE
              - Invalid datatype argument.  May be an uncommitted
              MPI_Datatype (see MPI_Type_commit).
       MPI_ERR_OP
              - Invalid operation.  MPI operations (objects of type MPI_Op)
              must either be one of the predefined operations (e.g.,
              MPI_SUM) or created with MPI_Op_create.  Additionally, only
              certain datatypes are allowed with given predefined
              operations.  See MPI-1, section 4.9.2.
       MPI_ERR_BUFFER
              - Invalid buffer pointer.  Usually a null buffer where one is
              not valid.
       MPI_ERR_BUFFER
              - This error class is associated with an error code that
              indicates that two buffer arguments are aliased; that is,
              they describe overlapping storage (often the exact same
              storage).  This is prohibited in MPI (because it is
              prohibited by the Fortran standard, and rather than have a
              separate case for C and Fortran, the MPI Forum adopted the
              more restrictive requirements of Fortran).

MORE INFORMATION

       For more information, please see the official MPI Forum web site,
       which contains the text of both the MPI-1 and MPI-2 standards.
       These documents contain detailed information about each MPI function
       (most of which is not duplicated in these man pages).

       http://www.mpi-forum.org/

ACKNOWLEDGEMENTS

       The LAM Team would like to thank the MPICH Team for the handy
       program to generate man pages ("doctext" from
       ftp://ftp.mcs.anl.gov/pub/sowing/sowing.tar.gz), the initial
       formatting, and some initial text for most of the MPI-1 man pages.

LOCATION

       exscan.c

LAM/MPI 7.1.2                      2/23/2006                     MPI_Exscan(3)