MPI_Exscan man page on YellowDog

MPI_Exscan(3OpenMPI)					  MPI_Exscan(3OpenMPI)

NAME
       MPI_Exscan - Computes an exclusive scan (partial reduction)

SYNTAX
C Syntax
       #include <mpi.h>
       int MPI_Exscan(void *sendbuf, void *recvbuf, int count,
	    MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

Fortran Syntax
       INCLUDE 'mpif.h'
       MPI_EXSCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR)
	    <type>    SENDBUF(*), RECVBUF(*)
	    INTEGER   COUNT, DATATYPE, OP, COMM, IERROR

C++ Syntax
       #include <mpi.h>
       void MPI::Intracomm::Exscan(const void* sendbuf, void* recvbuf,
	    int count, const MPI::Datatype& datatype,
	    const MPI::Op& op) const

INPUT PARAMETERS
       sendbuf	 Send buffer (choice).

       count	 Number of elements in input buffer (integer).

       datatype	 Data type of elements of input buffer (handle).

       op	 Operation (handle).

       comm	 Communicator (handle).

OUTPUT PARAMETERS
       recvbuf	 Receive buffer (choice).

       IERROR	 Fortran only: Error status (integer).

DESCRIPTION
       MPI_Exscan is used to perform an exclusive prefix reduction on data
       distributed across the calling processes. The operation returns, in
       the recvbuf of the process with rank i, the reduction (calculated
       according to the function op) of the values in the sendbufs of
       processes with ranks 0, ..., i-1. Compare this with the
       functionality of MPI_Scan, which calculates over the range 0, ...,
       i (inclusive). The type of operations supported, their semantics,
       and the constraints on send and receive buffers are as for
       MPI_Reduce.

       The value in recvbuf on process 0 is undefined, because recvbuf is
       not significant for process 0. The value in recvbuf on process 1 is
       always the value in sendbuf on process 0.

       No MPI_IN_PLACE operation is supported.
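
       The following is an illustrative sketch, not part of this page: an
       exclusive prefix sum over ranks using the predefined MPI_SUM
       operation. Process i receives 0 + 1 + ... + (i-1), and recvbuf on
       process 0 is left untouched.

           #include <mpi.h>
           #include <stdio.h>

           int main(int argc, char *argv[])
           {
               int rank, prefix = 0;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);

               /* Each process contributes its rank; with MPI_SUM,
                * process i receives the sum of ranks 0 through i-1. */
               MPI_Exscan(&rank, &prefix, 1, MPI_INT, MPI_SUM,
                          MPI_COMM_WORLD);

               if (rank > 0)   /* prefix is undefined on process 0 */
                   printf("rank %d: exclusive prefix sum = %d\n",
                          rank, prefix);

               MPI_Finalize();
               return 0;
           }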

NOTES
       MPI does not specify which process computes which operation. In
       particular, both processes 0 and 1 may participate in the
       computation even though the results for both processes' recvbuf are
       degenerate. Therefore, all processes, including 0 and 1, must
       provide the same op.

       It can be argued, from a mathematical perspective, that the
       definition of MPI_Exscan is unsatisfactory because the output at
       process 0 is undefined. The "mathematically correct" output for
       process 0 would be the unit element of the reduction operation.
       However, such a definition of an exclusive scan would not work with
       user-defined op functions, as there is no way for MPI to "know" the
       unit value for these custom operations.
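
       As a sketch of this point (the operation below is hypothetical, not
       part of this page), consider an operation registered with
       MPI_Op_create: MPI sees only an opaque callback and has no way to
       derive its unit element, here the value 1.

           #include <mpi.h>
           #include <stdio.h>

           /* Hypothetical user-defined reduction: elementwise product
            * modulo 1000003. MPI cannot infer that its unit element
            * is 1, so recvbuf on process 0 must remain undefined. */
           static void modprod(void *in, void *inout, int *len,
                               MPI_Datatype *dtype)
           {
               long long *a = (long long *)in, *b = (long long *)inout;
               for (int i = 0; i < *len; ++i)
                   b[i] = (a[i] * b[i]) % 1000003;
           }

           int main(int argc, char *argv[])
           {
               int rank;
               long long x, result = 0;
               MPI_Op op;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);

               x = rank + 2;  /* each process contributes rank+2 */
               MPI_Op_create(modprod, 1, &op);  /* 1: commutative */
               MPI_Exscan(&x, &result, 1, MPI_LONG_LONG, op,
                          MPI_COMM_WORLD);

               if (rank > 0)
                   printf("rank %d: running product mod 1000003 = %lld\n",
                          rank, result);

               MPI_Op_free(&op);
               MPI_Finalize();
               return 0;
           }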

NOTES ON COLLECTIVE OPERATIONS
       The reduction functions of type MPI_Op do not return an error
       value. As a result, if the functions detect an error, all they can
       do is either call MPI_Abort or silently skip the problem. Thus, if
       the error handler is changed from MPI_ERRORS_ARE_FATAL to something
       else (e.g., MPI_ERRORS_RETURN), then no error may be indicated.

       The reason for this is the performance cost of ensuring that all
       collective routines return the same error value.

ERRORS
       Almost all MPI routines return an error value; C routines return it
       as the value of the function and Fortran routines in the last
       argument. C++ functions do not return errors. If the default error
       handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the
       C++ exception mechanism will be used to throw an MPI::Exception
       object.

       Before the error value is returned, the current MPI error handler
       is called. By default, this error handler aborts the MPI job,
       except for I/O function errors. The error handler may be changed
       with MPI_Comm_set_errhandler; the predefined error handler
       MPI_ERRORS_RETURN may be used to cause error values to be returned.
       Note that MPI does not guarantee that an MPI program can continue
       past an error.
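
       As an illustrative sketch, not part of this page, a program can
       select MPI_ERRORS_RETURN on a communicator and inspect the return
       code of MPI_Exscan directly:

           #include <mpi.h>
           #include <stdio.h>

           int main(int argc, char *argv[])
           {
               int rank, prefix = 0, rc;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);

               /* Report errors via return codes instead of aborting. */
               MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

               rc = MPI_Exscan(&rank, &prefix, 1, MPI_INT, MPI_SUM,
                               MPI_COMM_WORLD);
               if (rc != MPI_SUCCESS) {
                   char msg[MPI_MAX_ERROR_STRING];
                   int len;
                   MPI_Error_string(rc, msg, &len);
                   fprintf(stderr, "MPI_Exscan failed: %s\n", msg);
                   /* MPI does not guarantee the program can continue
                    * past this point. */
               }

               MPI_Finalize();
               return 0;
           }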

       See the MPI man page for a full list of MPI error codes.

SEE ALSO
       MPI_Op_create
       MPI_Reduce
       MPI_Scan

Open MPI 1.2			September 2006		  MPI_Exscan(3OpenMPI)