MPI_Allgather(3OpenMPI)				       MPI_Allgather(3OpenMPI)

NAME
       MPI_Allgather  -	 Gathers data from all processes and distributes it to
       all processes

SYNTAX
C Syntax
       #include <mpi.h>
       int MPI_Allgather(void *sendbuf, int sendcount,
	    MPI_Datatype sendtype, void *recvbuf, int recvcount,
	    MPI_Datatype recvtype, MPI_Comm comm)

Fortran Syntax
       INCLUDE 'mpif.h'
       MPI_ALLGATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT,
		 RECVTYPE, COMM, IERROR)
	    <type>    SENDBUF (*), RECVBUF (*)
	    INTEGER   SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM
	    INTEGER   IERROR

C++ Syntax
       #include <mpi.h>
       void MPI::Comm::Allgather(const void* sendbuf, int sendcount, const
	    MPI::Datatype& sendtype, void* recvbuf, int recvcount,
	    const MPI::Datatype& recvtype) const = 0

INPUT PARAMETERS
       sendbuf	 Starting address of send buffer (choice).

       sendcount Number of elements in send buffer (integer).

       sendtype	 Datatype of send buffer elements (handle).

       recvcount Number of elements received from any process (integer).

       recvtype	 Datatype of receive buffer elements (handle).

       comm	 Communicator (handle).

OUTPUT PARAMETERS
       recvbuf	 Address of receive buffer (choice).

       IERROR	 Fortran only: Error status (integer).

DESCRIPTION
       MPI_Allgather is similar	 to  MPI_Gather,  except  that	all  processes
       receive	the result, instead of just the root. In other words, all pro‐
       cesses contribute to the result, and all processes receive the result.

       The type signature associated with sendcount,  sendtype	at  a  process
       must be equal to the type signature associated with recvcount, recvtype
       at any other process.

       The outcome of a call to MPI_Allgather(...) is as if all processes exe‐
       cuted n calls to

	 MPI_Gather(sendbuf,sendcount,sendtype,recvbuf,recvcount,
		    recvtype,root,comm),

       for root = 0, ..., n-1.  The rules for correct usage of MPI_Allgather
       are easily found from the corresponding rules for MPI_Gather.

       Example: The all-gather version	of  Example  1	in  MPI_Gather.	 Using
       MPI_Allgather,  we will gather 100 ints from every process in the group
       to every process.

	   MPI_Comm comm;
	   int gsize, sendarray[100];
	   int *rbuf;
	   ...
	   MPI_Comm_size(comm, &gsize);
	   rbuf = (int *)malloc(gsize*100*sizeof(int));
	   MPI_Allgather(sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, comm);

       After the call, every process has the group-wide concatenation  of  the
       sets of data.
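
       A self-contained variant of this fragment is sketched below; the use
       of MPI_COMM_WORLD and the initialization of sendarray are illustrative
       additions, not part of the original example:

	   #include <mpi.h>
	   #include <stdlib.h>

	   int main(int argc, char *argv[])
	   {
	       int gsize, rank, i;
	       int sendarray[100];
	       int *rbuf;

	       MPI_Init(&argc, &argv);
	       MPI_Comm_size(MPI_COMM_WORLD, &gsize);
	       MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	       /* Fill this process's contribution (illustrative data). */
	       for (i = 0; i < 100; i++)
		   sendarray[i] = rank;

	       /* Gather 100 ints from every process to every process. */
	       rbuf = (int *)malloc(gsize * 100 * sizeof(int));
	       MPI_Allgather(sendarray, 100, MPI_INT,
			     rbuf, 100, MPI_INT, MPI_COMM_WORLD);

	       /* rbuf now holds gsize blocks of 100 ints, ordered by rank. */
	       free(rbuf);
	       MPI_Finalize();
	       return 0;
	   }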

USE OF IN-PLACE OPTION
       When  the communicator is an intracommunicator, you can perform an all-
       gather operation in-place (the output buffer is used as the input  buf‐
       fer).   Use  the variable MPI_IN_PLACE as the value of both sendbuf and
       recvbuf.	 In this case, sendcount and sendtype are ignored.  The	 input
       data  of	 each  process is assumed to be in the area where that process
       would receive its own contribution to  the  receive  buffer.   Specifi‐
       cally,  the  outcome  of a call to MPI_Allgather that used the in-place
       option is identical to the case in which all processes executed n calls
       to

	  MPI_Gather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, recvbuf,
		     recvcount, recvtype, root, comm)

       for root = 0, ..., n-1.

       Note  that  MPI_IN_PLACE	 is  a	special kind of value; it has the same
       restrictions on its use as MPI_BOTTOM.
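
       For illustration, a minimal C sketch of the in-place form (assuming
       comm is an intracommunicator and each process contributes 100 ints):

	   int gsize, rank, i;
	   int *rbuf;

	   MPI_Comm_size(comm, &gsize);
	   MPI_Comm_rank(comm, &rank);
	   rbuf = (int *)malloc(gsize * 100 * sizeof(int));

	   /* Each process writes its contribution where it would
	      otherwise receive it: at offset rank*100. */
	   for (i = 0; i < 100; i++)
	       rbuf[rank * 100 + i] = rank;

	   /* sendcount and sendtype are ignored with MPI_IN_PLACE. */
	   MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
			 rbuf, 100, MPI_INT, comm);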

       Because the in-place option converts the receive buffer	into  a	 send-
       and-receive  buffer,  a	Fortran binding that includes INTENT must mark
       these as INOUT, not OUT.

WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
       When the communicator is an inter-communicator,	the  gather  operation
       occurs in two phases.  The data is gathered from all the members of the
       first group and received by all the members of the second group.	  Then
       the  data  is  gathered	from  all  the members of the second group and
       received by all the members of the first.  The operation, however, need
       not be symmetric.  The number of items sent by the processes in the
       first group need not be equal to the number of items sent by the
       processes in the second group.  You can move data in only one
       direction by giving sendcount a value of 0 for communication in the
       reverse direction.

       MPI_Allgather has no root argument, so no process plays a
       distinguished role in either group.  The send buffer arguments of the
       processes in one group must be consistent with the receive buffer
       arguments of the processes in the other group, and vice versa.

       When  the  communicator	is an intra-communicator, these groups are the
       same, and the operation occurs in a single phase.
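
       For illustration, a sketch of the receive-buffer sizing on an inter-
       communicator; intercomm is assumed to have been obtained elsewhere
       (for example, from MPI_Intercomm_create), and each process sends
       100 ints:

	   int rsize;
	   int sendarray[100];
	   int *rbuf;

	   /* The receive buffer holds one contribution per process
	      of the remote group, not of the local group. */
	   MPI_Comm_remote_size(intercomm, &rsize);
	   rbuf = (int *)malloc(rsize * 100 * sizeof(int));
	   MPI_Allgather(sendarray, 100, MPI_INT,
			 rbuf, 100, MPI_INT, intercomm);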

ERRORS
       Almost all MPI routines return an error value; C routines as the	 value
       of  the	function  and Fortran routines in the last argument. C++ func‐
       tions do not return errors. If the default  error  handler  is  set  to
       MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
       will be used to throw an MPI::Exception object.

       Before the error value is returned, the current MPI  error  handler  is
       called.	By  default, this error handler aborts the MPI job, except for
       I/O  function  errors.  The  error  handler   may   be	changed	  with
       MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
       may be used to cause error values to be returned. Note  that  MPI  does
       not guarantee that an MPI program can continue past an error.
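
       For example, a sketch of checking the return value after installing
       the predefined handler MPI_ERRORS_RETURN (comm, sendbuf, and recvbuf
       are placeholders):

	   int rc, msglen;
	   char msg[MPI_MAX_ERROR_STRING];

	   MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);
	   rc = MPI_Allgather(sendbuf, 100, MPI_INT,
			      recvbuf, 100, MPI_INT, comm);
	   if (rc != MPI_SUCCESS) {
	       /* Translate the error code into a readable message. */
	       MPI_Error_string(rc, msg, &msglen);
	       fprintf(stderr, "MPI_Allgather: %s\n", msg);
	   }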

SEE ALSO
       MPI_Allgatherv
       MPI_Gather

Open MPI 1.2			September 2006	       MPI_Allgather(3OpenMPI)