MPI_Scan(3)                         LAM/MPI                        MPI_Scan(3)

NAME

       MPI_Scan - Computes the scan (partial reductions) of data on a
       collection of processes

SYNOPSIS

       #include <mpi.h>
       int MPI_Scan(void *sbuf, void *rbuf, int count,
                    MPI_Datatype dtype, MPI_Op op, MPI_Comm comm)

INPUT PARAMETERS

       sbuf   - starting address of send buffer (choice)
       count  - number of elements in input buffer (integer)
       dtype  - data type of elements of input buffer (handle)
       op     - operation (handle)
       comm   - communicator (handle)

OUTPUT PARAMETER

       rbuf   - starting address of receive buffer (choice)
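
EXAMPLE

       A minimal usage sketch (the variable names are illustrative and not
       taken from the LAM/MPI sources): each process contributes its own
       rank, and MPI_Scan with MPI_SUM leaves the inclusive prefix sum of
       ranks 0..i on process i.

       #include <mpi.h>
       #include <stdio.h>

       int main(int argc, char **argv)
       {
           int rank, partial;

           MPI_Init(&argc, &argv);
           MPI_Comm_rank(MPI_COMM_WORLD, &rank);

           /* Inclusive prefix sum over the ranks of MPI_COMM_WORLD */
           MPI_Scan(&rank, &partial, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

           printf("rank %d: partial sum = %d\n", rank, partial);

           MPI_Finalize();
           return 0;
       }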

USAGE WITH IMPI EXTENSIONS

       LAM/MPI does not yet support invoking this function on a communicator
       that contains ranks that are non-local IMPI procs.

NOTES FOR FORTRAN

       All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK) have
       an additional argument ierr at the end of the argument list.  ierr is
       an integer and has the same meaning as the return value of the
       routine in C.  In Fortran, MPI routines are subroutines, and are
       invoked with the call statement.

       All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER
       in Fortran.

NOTES ON COLLECTIVE OPERATIONS

       The reduction functions (MPI_Op) do not return an error value.  As a
       result, if the functions detect an error, all they can do is either
       call MPI_Abort or silently skip the problem.  Thus, if you change
       the error handler from MPI_ERRORS_ARE_FATAL to something else (e.g.,
       MPI_ERRORS_RETURN), then no error may be indicated.

       The reason for this is the performance problems that arise in
       ensuring that all collective routines return the same error value.
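
       The signature of a user-defined reduction function reflects this: it
       returns void.  The following sketch (illustrative only; my_int_sum
       and its error handling are not part of LAM/MPI) can therefore only
       call MPI_Abort when it detects a problem:

       #include <mpi.h>

       /* User-defined reduction function: no return value, so the only
          way to react to a bad argument is to abort (or to ignore it). */
       static void my_int_sum(void *in, void *inout, int *len,
                              MPI_Datatype *dtype)
       {
           int i;
           int *a = (int *) in;
           int *b = (int *) inout;

           if (*dtype != MPI_INT) {
               MPI_Abort(MPI_COMM_WORLD, 1);  /* cannot return an error */
           }
           for (i = 0; i < *len; ++i) {
               b[i] += a[i];
           }
       }

       /* The operation is then created with MPI_Op_create; the second
          argument (1) declares it commutative:
              MPI_Op op;
              MPI_Op_create(my_int_sum, 1, &op);                          */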

ERRORS

       If an error occurs in an MPI function, the current MPI error handler
       is called to handle it.  By default, this error handler aborts the
       MPI job.  The error handler may be changed with MPI_Errhandler_set;
       the predefined error handler MPI_ERRORS_RETURN may be used to cause
       error values to be returned (in C and Fortran; this error handler is
       less useful with the C++ MPI bindings.  The predefined error handler
       MPI::ERRORS_THROW_EXCEPTIONS should be used in C++ if the error
       value needs to be recovered).  Note that MPI does not guarantee that
       an MPI program can continue past an error.

       All MPI routines (except MPI_Wtime and MPI_Wtick) return an error
       value; C routines as the value of the function and Fortran routines
       in the last argument.  The C++ bindings for MPI do not return error
       values; instead, error values are communicated by throwing
       exceptions of type MPI::Exception (but not by default).  Exceptions
       are only thrown if the error value is not MPI::SUCCESS.

       Note that if the MPI::ERRORS_RETURN handler is set in C++, while MPI
       functions will return upon an error, there will be no way to recover
       what the actual error value was.
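
       As a sketch of checking return values (the helper name checked_scan
       and its structure are illustrative, not part of LAM/MPI), one can
       switch a communicator to MPI_ERRORS_RETURN and then examine the
       value returned by MPI_Scan:

       #include <mpi.h>
       #include <stdio.h>

       /* Have errors on this communicator reported as return codes
          instead of aborting the job, then check what MPI_Scan returns. */
       int checked_scan(int value, MPI_Comm comm)
       {
           int partial, err, class;

           MPI_Errhandler_set(comm, MPI_ERRORS_RETURN);
           err = MPI_Scan(&value, &partial, 1, MPI_INT, MPI_SUM, comm);
           if (err != MPI_SUCCESS) {
               MPI_Error_class(err, &class);
               fprintf(stderr, "MPI_Scan failed: error class %d\n", class);
           }
           return partial;
       }

       The error classes that may be reported for this function include: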
       MPI_SUCCESS
              - No error; MPI routine completed successfully.
       MPI_ERR_COMM
              - Invalid communicator.  A common error is to use a null
              communicator in a call (not even allowed in MPI_Comm_rank).
       MPI_ERR_OTHER
              - A collective implementation was not able to be located at
              run-time for this communicator.
       MPI_ERR_OTHER
              - A communicator that contains some non-local IMPI procs was
              used for some function which has not yet had the IMPI
              extensions implemented.  For example, most collectives on
              IMPI communicators have not been implemented yet.
       MPI_ERR_COUNT
              - Invalid count argument.  Count arguments must be
              non-negative; a count of zero is often valid.
       MPI_ERR_TYPE
              - Invalid datatype argument.  May be an uncommitted
              MPI_Datatype (see MPI_Type_commit).
       MPI_ERR_OP
              - Invalid operation.  MPI operations (objects of type MPI_Op)
              must either be one of the predefined operations (e.g.,
              MPI_SUM) or created with MPI_Op_create.  Additionally, only
              certain datatypes are allowed with given predefined
              operations.  See MPI-1, section 4.9.2.
       MPI_ERR_BUFFER
              - Invalid buffer pointer.  Usually a null buffer where one is
              not valid.
       MPI_ERR_BUFFER
              - This error class is associated with an error code that
              indicates that two buffer arguments are aliased; that is,
              they describe overlapping storage (often the exact same
              storage).  This is prohibited in MPI (because it is
              prohibited by the Fortran standard, and rather than have a
              separate case for C and Fortran, the MPI Forum adopted the
              more restrictive requirements of Fortran).

MORE INFORMATION

       For more information, please see the official MPI Forum web site,
       which contains the text of both the MPI-1 and MPI-2 standards.
       These documents contain detailed information about each MPI function
       (most of which is not duplicated in these man pages).

       http://www.mpi-forum.org/

ACKNOWLEDGEMENTS

       The LAM Team would like to thank the MPICH Team for the handy
       program to generate man pages ("doctext" from
       ftp://ftp.mcs.anl.gov/pub/sowing/sowing.tar.gz), the initial
       formatting, and some initial text for most of the MPI-1 man pages.

LOCATION

       scan.c

LAM/MPI 7.1.2                      2/23/2006                       MPI_Scan(3)