MPI_Get_accumulate(3)		      MPI		 MPI_Get_accumulate(3)

NAME
       MPI_Get_accumulate  -  Perform an atomic, one-sided read-and-accumulate
       operation.

SYNOPSIS
       int MPI_Get_accumulate(const void *origin_addr, int origin_count,
	       MPI_Datatype origin_datatype, void *result_addr, int result_count,
	       MPI_Datatype result_datatype, int target_rank, MPI_Aint target_disp,
	       int target_count, MPI_Datatype target_datatype, MPI_Op op, MPI_Win win)

       Accumulate origin_count elements of type origin_datatype from the
       origin buffer (origin_addr) to the buffer at offset target_disp, in
       the target window specified by target_rank and win, using the
       operation op, and return in the result buffer result_addr the
       content of the target buffer before the accumulation.

INPUT PARAMETERS
       origin_addr
	      - initial address of buffer (choice)
       origin_count
	      - number of entries in buffer (nonnegative integer)
       origin_datatype
	      - datatype of each buffer entry (handle)
       result_addr
	      - initial address of result buffer (choice)
       result_count
              - number of entries in result buffer (nonnegative integer)
       result_datatype
	      - datatype of each entry in result buffer (handle)
       target_rank
	      - rank of target (nonnegative integer)
       target_disp
              - displacement from start of window to beginning of target
              buffer (nonnegative integer)
       target_count
	      - number of entries in target buffer (nonnegative integer)
       target_datatype
	      - datatype of each entry in target buffer (handle)
       op     - predefined reduce operation (handle)
       win    - window object (handle)
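
       As an illustration, the following minimal sketch performs a
       fetch-and-add on a counter exposed by rank 0.  The window layout
       (one int per rank) and the passive-target lock are assumptions of
       this example, not requirements of MPI_Get_accumulate.

           #include <mpi.h>
           #include <stdio.h>

           int main(int argc, char **argv)
           {
               MPI_Init(&argc, &argv);

               int counter = 0;    /* memory exposed in the window */
               MPI_Win win;
               MPI_Win_create(&counter, sizeof(int), sizeof(int),
                              MPI_INFO_NULL, MPI_COMM_WORLD, &win);

               int one = 1, old;
               MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
               /* Atomically: old = rank 0's counter; rank 0's counter += 1 */
               MPI_Get_accumulate(&one, 1, MPI_INT, &old, 1, MPI_INT,
                                  0 /* target_rank */, 0 /* target_disp */,
                                  1, MPI_INT, MPI_SUM, win);
               MPI_Win_unlock(0, win);

               printf("counter before my increment: %d\n", old);

               MPI_Win_free(&win);
               MPI_Finalize();
               return 0;
           }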

NOTES
       This operation is atomic with respect to other "accumulate"
       operations.

       The get and accumulate steps are executed atomically for each basic
       element in the datatype (see MPI 3.0 Section 11.7 for details).  The
       predefined operation MPI_REPLACE provides fetch-and-set behavior.
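
       For example, a fetch-and-set can be sketched as follows, assuming a
       window win that holds a single MPI_INT per rank and an access epoch
       that is already open:

           int new_val = 42, prev_val;
           /* prev_val = old target value; target value = new_val */
           MPI_Get_accumulate(&new_val, 1, MPI_INT, &prev_val, 1, MPI_INT,
                              target_rank, 0, 1, MPI_INT, MPI_REPLACE, win);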

       The origin and result buffers (origin_addr and result_addr) must be
       disjoint.  Each datatype argument must be a predefined datatype or a
       derived datatype where all basic components are of the same
       predefined datatype.  All datatype arguments must be constructed
       from the same predefined datatype.  The operation op applies to
       elements of that predefined type.  target_datatype must not specify
       overlapping entries, and the target buffer must fit in the target
       window or in attached memory in a dynamic window.
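
       For example, a derived datatype whose basic components are all
       MPI_INT satisfies these restrictions.  A sketch, again assuming win,
       target_rank, and an open access epoch:

           MPI_Datatype four_ints;
           MPI_Type_contiguous(4, MPI_INT, &four_ints);
           MPI_Type_commit(&four_ints);

           /* All three datatype arguments are built from MPI_INT;
            * MPI_SUM applies elementwise to the int components. */
           int vals[4] = {1, 2, 3, 4}, prev[4];
           MPI_Get_accumulate(vals, 1, four_ints, prev, 1, four_ints,
                              target_rank, 0, 1, four_ints, MPI_SUM, win);

           MPI_Type_free(&four_ints);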

       Any of the predefined operations for MPI_Reduce, as well as
       MPI_NO_OP or MPI_REPLACE, can be specified as op.  User-defined
       functions cannot be used.  A new predefined operation, MPI_NO_OP,
       is defined.  It corresponds to the associative function f(a, b) = a;
       i.e., the current value in the target memory is returned in the
       result buffer at the origin, and no operation is performed on the
       target buffer.  MPI_NO_OP can be used only in MPI_Get_accumulate,
       MPI_Rget_accumulate, and MPI_Fetch_and_op.  MPI_NO_OP cannot be
       used in MPI_Accumulate, MPI_Raccumulate, or collective reduction
       operations such as MPI_Reduce.
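
       For example, an atomic read of a target element can be sketched as
       follows.  With MPI_NO_OP the origin buffer arguments are ignored,
       so a NULL origin with zero count is legal; win, target_rank, and an
       open access epoch are assumed:

           int snapshot;
           MPI_Get_accumulate(NULL, 0, MPI_INT, &snapshot, 1, MPI_INT,
                              target_rank, 0, 1, MPI_INT, MPI_NO_OP, win);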

NOTES FOR FORTRAN
       All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK)
       have an additional argument ierr at the end of the argument list.
       ierr is an integer and has the same meaning as the return value of
       the routine in C.  In Fortran, MPI routines are subroutines and are
       invoked with the call statement.

       All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER
       in Fortran.

ERRORS
       All MPI routines (except MPI_Wtime and MPI_Wtick) return an error
       value; C routines as the value of the function and Fortran routines
       in the last argument.  Before the value is returned, the current MPI
       error handler is called.  By default, this error handler aborts the
       MPI job.  The error handler may be changed with
       MPI_Comm_set_errhandler (for communicators), MPI_File_set_errhandler
       (for files), and MPI_Win_set_errhandler (for RMA windows).  The
       MPI-1 routine MPI_Errhandler_set may be used, but its use is
       deprecated.  The predefined error handler MPI_ERRORS_RETURN may be
       used to cause error values to be returned.  Note that MPI does not
       guarantee that an MPI program can continue past an error; however,
       MPI implementations will attempt to continue whenever possible.
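
       For example, RMA errors on a window can be returned to the caller
       instead of aborting the job.  A sketch, assuming the window win and
       an open access epoch from the example above:

           MPI_Win_set_errhandler(win, MPI_ERRORS_RETURN);

           int one = 1, old;
           int rc = MPI_Get_accumulate(&one, 1, MPI_INT, &old, 1, MPI_INT,
                                       0, 0, 1, MPI_INT, MPI_SUM, win);
           if (rc != MPI_SUCCESS) {
               char msg[MPI_MAX_ERROR_STRING];
               int len;
               MPI_Error_string(rc, msg, &len);
               fprintf(stderr, "MPI_Get_accumulate: %s\n", msg);
           }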

       MPI_SUCCESS
              - No error; MPI routine completed successfully.
       MPI_ERR_ARG
              - Invalid argument.  Some argument is invalid and is not
              identified by a specific error class (e.g., MPI_ERR_RANK).
       MPI_ERR_COUNT
              - Invalid count argument.  Count arguments must be
              nonnegative; a count of zero is often valid.
       MPI_ERR_RANK
              - Invalid source or destination rank.  Ranks must be between
              zero and the size of the communicator minus one; ranks in a
              receive (MPI_Recv, MPI_Irecv, MPI_Sendrecv, etc.) may also
              be MPI_ANY_SOURCE.
       MPI_ERR_TYPE
              - Invalid datatype argument.  Additionally, this error can
              occur if an uncommitted MPI_Datatype (see MPI_Type_commit)
              is used in a communication call.
       MPI_ERR_WIN
              - Invalid MPI window object.

SEE ALSO
       MPI_Rget_accumulate MPI_Fetch_and_op

				   11/9/2015		 MPI_Get_accumulate(3)