MPI_Scatter man page on YellowDog

MPI_Scatter(3OpenMPI)					 MPI_Scatter(3OpenMPI)

NAME
       MPI_Scatter - Sends data from one task to all tasks in a group.

SYNTAX
C Syntax
       #include <mpi.h>
       int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype,
	    void *recvbuf, int recvcount, MPI_Datatype recvtype, int root,
	    MPI_Comm comm)

Fortran Syntax
       INCLUDE 'mpif.h'
       MPI_SCATTER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT,
		 RECVTYPE, ROOT, COMM, IERROR)
	    <type>    SENDBUF(*), RECVBUF(*)
	    INTEGER   SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT
	    INTEGER   COMM, IERROR

C++ Syntax
       #include <mpi.h>
       void MPI::Comm::Scatter(const void* sendbuf, int sendcount,
	    const MPI::Datatype& sendtype, void* recvbuf,
	    int recvcount, const MPI::Datatype& recvtype,
	    int root) const

INPUT PARAMETERS
       sendbuf	 Address of send buffer (choice, significant only at root).

       sendcount Number of elements sent to each process (integer, significant
		 only at root).

       sendtype	 Datatype of send buffer elements (handle, significant only at
		 root).

       recvcount Number of elements in receive buffer (integer).

       recvtype	 Datatype of receive buffer elements (handle).

       root	 Rank of sending process (integer).

       comm	 Communicator (handle).

OUTPUT PARAMETERS
       recvbuf	 Address of receive buffer (choice).

       IERROR	 Fortran only: Error status (integer).

DESCRIPTION
       MPI_Scatter is the inverse operation to MPI_Gather.

       The outcome is as if the root executed n send operations,

	   MPI_Send(sendbuf + i * sendcount * extent(sendtype), sendcount,
		    sendtype, i, ...)

       and each process executed a receive,

	   MPI_Recv(recvbuf, recvcount, recvtype, root, ...).

       An  alternative	description  is	 that  the  root  sends a message with
       MPI_Send(sendbuf, sendcount * n, sendtype, ...). This message is	 split
       into  n	equal  segments, the ith segment is sent to the ith process in
       the group, and each process receives this message as above.

       The send buffer is ignored for all nonroot processes.

       The type signature associated with sendcount, sendtype at the root must
       be  equal  to the type signature associated with recvcount, recvtype at
       all processes (however, the type maps may be different).	 This  implies
       that  the  amount  of  data  sent  must	be equal to the amount of data
       received, pairwise between each process and  the	 root.	Distinct  type
       maps between sender and receiver are still allowed.
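
       For instance, in the following sketch (hypothetical variable names),
       the root sends 100 contiguous ints to each process while every
       receiver places them at stride two in its buffer; the type signatures
       match (100 ints on both sides) even though the type maps differ.

	       MPI_Comm comm;
	       MPI_Datatype stridetype;
	       int gsize, *sendbuf;
	       int root, rbuf[200];    /* 100 ints stored at every other slot */
	       ...
	       MPI_Comm_size(comm, &gsize);
	       sendbuf = (int *)malloc(gsize*100*sizeof(int));
	       ...
	       /* One stridetype element describes 100 ints spaced two apart. */
	       MPI_Type_vector(100, 1, 2, MPI_INT, &stridetype);
	       MPI_Type_commit(&stridetype);
	       MPI_Scatter(sendbuf, 100, MPI_INT, rbuf, 1,
			   stridetype, root, comm);
	       MPI_Type_free(&stridetype);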

       All arguments to the function are significant on process root, while on
       other processes, only arguments	recvbuf,  recvcount,  recvtype,	 root,
       comm  are  significant. The arguments root and comm must have identical
       values on all processes.

       The specification of counts and types should not cause any location  on
       the root to be read more than once.

       Rationale:  Though not needed, the last restriction is imposed so as to
       achieve symmetry with MPI_Gather, where the  corresponding  restriction
       (a multiple-write restriction) is necessary.

       Example:	 The  reverse  of Example 1 in the MPI_Gather manpage. Scatter
       sets of 100 ints from the root to each process in the group.

	       MPI_Comm comm;
	       int gsize, *sendbuf;
	       int root, rbuf[100];
	       ...
	       /* Allocate one 100-int segment per process in the group. */
	       MPI_Comm_size(comm, &gsize);
	       sendbuf = (int *)malloc(gsize*100*sizeof(int));
	       ...
	       /* Every process, the root included, receives its own segment. */
	       MPI_Scatter(sendbuf, 100, MPI_INT, rbuf, 100,
			   MPI_INT, root, comm);
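
       The same pattern as a complete, self-contained sketch (the value
       choices are illustrative, not part of the standard example): rank 0
       scatters 100 ints to every rank over MPI_COMM_WORLD.

	       #include <stdio.h>
	       #include <stdlib.h>
	       #include <mpi.h>

	       int main(int argc, char **argv)
	       {
		   int gsize, rank, i, root = 0;
		   int rbuf[100];
		   int *sendbuf = NULL;

		   MPI_Init(&argc, &argv);
		   MPI_Comm_size(MPI_COMM_WORLD, &gsize);
		   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

		   if (rank == root) {
		       /* Rank i will receive the values i*100 .. i*100+99. */
		       sendbuf = (int *)malloc(gsize*100*sizeof(int));
		       for (i = 0; i < gsize*100; ++i)
			   sendbuf[i] = i;
		   }

		   MPI_Scatter(sendbuf, 100, MPI_INT, rbuf, 100, MPI_INT,
			       root, MPI_COMM_WORLD);

		   printf("rank %d received %d .. %d\n", rank, rbuf[0], rbuf[99]);

		   free(sendbuf);
		   MPI_Finalize();
		   return 0;
	       }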

USE OF IN-PLACE OPTION
       When the communicator is an intracommunicator, you can perform a
       scatter operation in-place (the output buffer is used as the input
       buffer).
       Use the variable MPI_IN_PLACE as the value of the root process recvbuf.
       In  this case, recvcount and recvtype are ignored, and the root process
       sends no data to itself.

       Note that MPI_IN_PLACE is a special kind of  value;  it	has  the  same
       restrictions on its use as MPI_BOTTOM.

       Because	the  in-place  option converts the receive buffer into a send-
       and-receive buffer, a Fortran binding that includes  INTENT  must  mark
       these as INOUT, not OUT.
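
       A minimal sketch of the in-place form (hypothetical buffer names): the
       root passes MPI_IN_PLACE as recvbuf and keeps its own segment where it
       already sits in sendbuf, while the other ranks receive normally.

	       MPI_Comm comm = MPI_COMM_WORLD;
	       int gsize, rank, root = 0;
	       int rbuf[100];
	       int *sendbuf = NULL;

	       MPI_Comm_size(comm, &gsize);
	       MPI_Comm_rank(comm, &rank);

	       if (rank == root) {
		   sendbuf = (int *)malloc(gsize*100*sizeof(int));
		   /* ... fill sendbuf; the root's own 100-int segment is
		      simply left in place ... */
		   MPI_Scatter(sendbuf, 100, MPI_INT, MPI_IN_PLACE, 0, MPI_INT,
			       root, comm);
	       } else {
		   MPI_Scatter(NULL, 0, MPI_INT, rbuf, 100, MPI_INT,
			       root, comm);
	       }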

WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
       When the communicator is an inter-communicator, the root process in the
       first group sends data to all processes in the second group.  The first
       group  defines  the  root  process.   That process uses MPI_ROOT as the
       value of its root argument.  The remaining processes use	 MPI_PROC_NULL
       as the value of their root argument.  All processes in the second group
       use the rank of that root process in the first group as	the  value  of
       their root argument.  The send buffer argument of the root process in
       the first group must be consistent with the receive buffer argument of
       the processes in the second group.
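
       A sketch of the three roles (the inter-communicator "intercomm" and
       the placeholders "in_first_group", "am_root", and "root_rank" are
       hypothetical application logic; the inter-communicator would have been
       created earlier, for example with MPI_Intercomm_create).

	       int rbuf[100];

	       if (in_first_group) {
		   if (am_root)
		       /* Root of the first group supplies the data. */
		       MPI_Scatter(sendbuf, 100, MPI_INT, NULL, 0, MPI_INT,
				   MPI_ROOT, intercomm);
		   else
		       /* Other members of the first group take no part. */
		       MPI_Scatter(NULL, 0, MPI_INT, NULL, 0, MPI_INT,
				   MPI_PROC_NULL, intercomm);
	       } else {
		   /* Second group: name the root by its rank in the first
		      group and receive the data. */
		   MPI_Scatter(NULL, 0, MPI_INT, rbuf, 100, MPI_INT,
			       root_rank, intercomm);
	       }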

ERRORS
       Almost all MPI routines return an error value; C routines as the value
       of the function and Fortran routines in the last argument.  C++
       functions do not return errors.  If the default error handler is set
       to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception
       mechanism will be used to throw an MPI::Exception object.

       Before  the  error  value is returned, the current MPI error handler is
       called. By default, this error handler aborts the MPI job,  except  for
       I/O   function	errors.	  The	error  handler	may  be	 changed  with
       MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
       may  be	used  to cause error values to be returned. Note that MPI does
       not guarantee that an MPI program can continue past an error.
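
       For example, a caller that prefers to check the return code itself
       might install MPI_ERRORS_RETURN first (a sketch; sendbuf, rbuf, and
       root are assumed to be set up as in the example above, and <stdio.h>
       to be included):

	       char errstring[MPI_MAX_ERROR_STRING];
	       int rc, resultlen;

	       /* Have errors returned to the caller instead of aborting. */
	       MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

	       rc = MPI_Scatter(sendbuf, 100, MPI_INT, rbuf, 100, MPI_INT,
				root, MPI_COMM_WORLD);
	       if (rc != MPI_SUCCESS) {
		   MPI_Error_string(rc, errstring, &resultlen);
		   fprintf(stderr, "MPI_Scatter failed: %s\n", errstring);
	       }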

SEE ALSO
       MPI_Scatterv
       MPI_Gather
       MPI_Gatherv

Open MPI 1.2			September 2006		 MPI_Scatter(3OpenMPI)