MPI_Allgatherv(3OpenMPI)			      MPI_Allgatherv(3OpenMPI)

NAME
       MPI_Allgatherv  -  Gathers  data	 from all processes and delivers it to
       all. Each process may contribute a different amount of data.

SYNTAX
C Syntax
       #include <mpi.h>
       int MPI_Allgatherv(void *sendbuf, int sendcount,
	    MPI_Datatype sendtype, void *recvbuf, int *recvcounts,
	    int *displs, MPI_Datatype recvtype, MPI_Comm comm)

Fortran Syntax
       INCLUDE 'mpif.h'
       MPI_ALLGATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF,
		 RECVCOUNTS, DISPLS, RECVTYPE, COMM, IERROR)
	    <type>    SENDBUF(*), RECVBUF(*)
	    INTEGER   SENDCOUNT, SENDTYPE, RECVCOUNTS(*)
	    INTEGER   DISPLS(*), RECVTYPE, COMM, IERROR

C++ Syntax
       #include <mpi.h>
       void MPI::Comm::Allgatherv(const void* sendbuf, int sendcount,
	    const MPI::Datatype& sendtype, void* recvbuf,
	    const int recvcounts[], const int displs[],
	    const MPI::Datatype& recvtype) const = 0

INPUT PARAMETERS
       sendbuf	 Starting address of send buffer (choice).

       sendcount Number of elements in send buffer (integer).

       sendtype	 Datatype of send buffer elements (handle).

       recvcounts
		 Integer array (of length group size) containing the number
		 of elements that are received from each process.

       displs	 Integer  array	 (of length group size). Entry i specifies the
		 displacement (relative to recvbuf)  at	 which	to  place  the
		 incoming data from process i.

       recvtype	 Datatype of receive buffer elements (handle).

       comm	 Communicator (handle).

OUTPUT PARAMETERS
       recvbuf	 Address of receive buffer (choice).

       IERROR	 Fortran only: Error status (integer).

DESCRIPTION
       MPI_Allgatherv is similar to MPI_Allgather in that all processes gather
       data from all other processes, except that each process can send a dif‐
       ferent  amount  of data. The block of data sent from the jth process is
       received by every process and placed in the jth	block  of  the	buffer
       recvbuf.

       The  type  signature  associated with sendcount, sendtype, at process j
       must be equal to the  type  signature  associated  with	recvcounts[j],
       recvtype at any other process.

       The outcome is as if all processes executed calls to
       MPI_Gatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts,
		   displs, recvtype, root, comm)

       for root = 0, ..., n-1.  The rules for correct usage of MPI_Allgatherv
       are easily found from the corresponding rules for MPI_Gatherv.
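
       As an illustration, the following sketch (not part of the original
       Open MPI documentation) gathers a varying number of integers: rank i
       contributes i + 1 copies of its own rank, and displs is built as a
       running sum of recvcounts.

	  #include <mpi.h>
	  #include <stdio.h>
	  #include <stdlib.h>

	  int main(int argc, char *argv[])
	  {
	      int rank, size, i, total = 0;

	      MPI_Init(&argc, &argv);
	      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
	      MPI_Comm_size(MPI_COMM_WORLD, &size);

	      /* Each rank sends a different amount: rank + 1 elements. */
	      int sendcount = rank + 1;
	      int *sendbuf = malloc(sendcount * sizeof(int));
	      for (i = 0; i < sendcount; i++)
		  sendbuf[i] = rank;

	      /* recvcounts[i] and displs[i] describe the block from rank i. */
	      int *recvcounts = malloc(size * sizeof(int));
	      int *displs = malloc(size * sizeof(int));
	      for (i = 0; i < size; i++) {
		  recvcounts[i] = i + 1;
		  displs[i] = total;	     /* running sum of recvcounts */
		  total += recvcounts[i];
	      }

	      int *recvbuf = malloc(total * sizeof(int));
	      MPI_Allgatherv(sendbuf, sendcount, MPI_INT,
			     recvbuf, recvcounts, displs, MPI_INT,
			     MPI_COMM_WORLD);

	      /* Every rank now holds the same result: 0 1 1 2 2 2 ... */
	      if (rank == 0) {
		  for (i = 0; i < total; i++)
		      printf("%d ", recvbuf[i]);
		  printf("\n");
	      }

	      free(sendbuf); free(recvbuf); free(recvcounts); free(displs);
	      MPI_Finalize();
	      return 0;
	  }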

USE OF IN-PLACE OPTION
       When the communicator is an intracommunicator, you can perform an  all-
       gather  operation in-place (the output buffer is used as the input buf‐
       fer).  Use the variable MPI_IN_PLACE as the value of sendbuf at all
       processes.  In this case, sendcount and sendtype are ignored.  The
       input data of each process is assumed to be in the area where that
       process would receive its own contribution to the receive buffer.
       Specifically, the outcome of a call to MPI_Allgatherv that used the
       in-place option is identical to the case in which all processes
       executed n calls to

	  MPI_Gatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, recvbuf,
		      recvcounts, displs, recvtype, root, comm)

       for root = 0, ..., n-1.

       Note that MPI_IN_PLACE is a special kind of  value;  it	has  the  same
       restrictions on its use as MPI_BOTTOM.

       Because	the  in-place  option converts the receive buffer into a send-
       and-receive buffer, a Fortran binding that includes  INTENT  must  mark
       these as INOUT, not OUT.
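
       Continuing the sketch above (same includes and setup; this fragment is
       illustrative, not from the original page), each process first writes
       its own contribution into its block of recvbuf and then passes
       MPI_IN_PLACE as sendbuf:

	  int rank, size, i, total = 0;
	  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
	  MPI_Comm_size(MPI_COMM_WORLD, &size);

	  int *recvcounts = malloc(size * sizeof(int));
	  int *displs = malloc(size * sizeof(int));
	  for (i = 0; i < size; i++) {
	      recvcounts[i] = i + 1;
	      displs[i] = total;
	      total += recvcounts[i];
	  }

	  /* The input data must already sit in this process's own block. */
	  int *recvbuf = malloc(total * sizeof(int));
	  for (i = 0; i < recvcounts[rank]; i++)
	      recvbuf[displs[rank] + i] = rank;

	  /* sendcount and sendtype are ignored when sendbuf is MPI_IN_PLACE. */
	  MPI_Allgatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
			 recvbuf, recvcounts, displs, MPI_INT,
			 MPI_COMM_WORLD);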

WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
       When  the  communicator	is an inter-communicator, the gather operation
       occurs in two phases.  The data is gathered from all the members of the
       first  group, concatenated, and received by all the members of the sec‐
       ond group.  Then the data is gathered from all the members of the  sec‐
       ond  group, concatenated, and received by all the members of the first.
       The send buffer arguments in one group must be consistent with the
       receive buffer arguments in the other group, and vice versa.  The
       operation exhibits symmetric, full-duplex behavior.

       Note that MPI_Allgatherv, unlike the rooted MPI_Gatherv, takes no root
       argument; there is no root process to designate.  All processes in
       both groups participate, and each receives the concatenated
       contributions of the remote group.

       When the communicator is an intra-communicator, these  groups  are  the
       same, and the operation occurs in a single phase.
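
       For example (a hypothetical sketch, not from the original page), two
       halves of MPI_COMM_WORLD can be bridged with MPI_Intercomm_create;
       note that recvcounts and displs then describe the remote group:

	  /* Even and odd world ranks form two groups that exchange one
	   * integer per process across an inter-communicator. */
	  int rank, rsize, i;
	  MPI_Comm local, inter;

	  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
	  MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &local);

	  /* Local leader is rank 0 of each half; the remote leader is the
	   * lowest world rank of the other half (0 or 1). */
	  MPI_Intercomm_create(local, 0, MPI_COMM_WORLD,
			       (rank % 2 == 0) ? 1 : 0, 99, &inter);

	  /* Size the receive arrays by the REMOTE group. */
	  MPI_Comm_remote_size(inter, &rsize);
	  int *recvcounts = malloc(rsize * sizeof(int));
	  int *displs = malloc(rsize * sizeof(int));
	  for (i = 0; i < rsize; i++) {
	      recvcounts[i] = 1;
	      displs[i] = i;
	  }
	  int *recvbuf = malloc(rsize * sizeof(int));

	  /* Each process receives the concatenated data of the other group. */
	  MPI_Allgatherv(&rank, 1, MPI_INT,
			 recvbuf, recvcounts, displs, MPI_INT, inter);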

ERRORS
       Almost  all MPI routines return an error value; C routines as the value
       of the function and Fortran routines in the last	 argument.  C++	 func‐
       tions  do  not  return  errors.	If the default error handler is set to
       MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
       will be used to throw an MPI::Exception object.

       Before  the  error  value is returned, the current MPI error handler is
       called. By default, this error handler aborts the MPI job,  except  for
       I/O   function	errors.	  The	error  handler	may  be	 changed  with
       MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
       may  be	used  to cause error values to be returned. Note that MPI does
       not guarantee that an MPI program can continue past an error.
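
       For example, a caller that prefers return codes over the aborting
       default might use a sketch like the following (illustrative; which
       argument errors a given implementation detects may vary, and the
       buffers are those of the first sketch above):

	  /* Switch MPI_COMM_WORLD from the aborting default to returning
	   * error codes, then check the result of the collective call. */
	  MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

	  int rc = MPI_Allgatherv(sendbuf, sendcount, MPI_INT,
				  recvbuf, recvcounts, displs, MPI_INT,
				  MPI_COMM_WORLD);
	  if (rc != MPI_SUCCESS) {
	      char msg[MPI_MAX_ERROR_STRING];
	      int len;
	      MPI_Error_string(rc, msg, &len);
	      fprintf(stderr, "MPI_Allgatherv: %s\n", msg);
	  }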

SEE ALSO
       MPI_Gatherv
       MPI_Allgather

Open MPI 1.2			  March 2007	      MPI_Allgatherv(3OpenMPI)