Parallel::Iterator(3)	 User Contributed Perl Documentation	Parallel::Iterator(3)

NAME
       Parallel::Iterator - Simple parallel execution

VERSION
       This document describes Parallel::Iterator version 1.00

SYNOPSIS
	   use Parallel::Iterator qw( iterate );

	   # A very expensive way to double 100 numbers...

	   my @nums = ( 1 .. 100 );

	   my $iter = iterate( sub {
	       my ( $id, $job ) = @_;
	       return $job * 2;
	   }, \@nums );

	   my @out = ();
	   while ( my ( $index, $value ) = $iter->() ) {
	       $out[$index] = $value;
	   }

DESCRIPTION
       The "map" function applies a user supplied transformation function to
       each element in a list, returning a new list containing the transformed
       elements.

       This module provides a 'parallel map'. Multiple worker processes are
       forked so that many instances of the transformation function may be
       executed simultaneously.

       For time-consuming operations, particularly operations that spend most
       of their time waiting for I/O, this is a big performance win. It also
       provides a simple idiom for making effective use of multi-CPU systems.

       There is, however, a considerable overhead associated with forking, so
       the example in the synopsis (doubling a list of numbers) is not a
       sensible use of this module.

   Example
       Imagine you have an array of URLs to fetch:

	   my @urls = qw(
	       http://google.com/
	       http://hexten.net/
	       http://search.cpan.org/
	       ... and lots more ...
	   );

       Write a worker function that takes an index and a URL and returns the
       page contents, or undef if the page can't be fetched:

	   use LWP::UserAgent;
	   my $ua = LWP::UserAgent->new;	 # shared by all fetches

	   sub fetch {
	       my ( $id, $url ) = @_;	 # workers receive ( index, value )
	       my $resp = $ua->get($url);
	       return unless $resp->is_success;
	       return $resp->content;
	   }

       Now write a function to synthesize a special kind of iterator:

	   sub list_iter {
	       my @ar = @_;
	       my $pos = 0;
	       return sub {
		   return if $pos >= @ar;
		   my @r = ( $pos, $ar[$pos] );	 # Note: returns ( index, value )
		   $pos++;
		   return @r;
	       };
	   }

       The returned iterator yields each element of the array in turn and then
       an empty list to signal exhaustion. Note that it returns both the index
       and the value of each element. Because multiple instances of the
       transformation function execute in parallel, the results won't
       necessarily come back in order; the index lets us place each completed
       item in the correct slot of an output array.
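
       To make the protocol concrete, here is the iterator driven by hand
       (illustrative values):

	   my $it = list_iter( 'a', 'b' );
	   my @first  = $it->();	 # ( 0, 'a' )
	   my @second = $it->();	 # ( 1, 'b' )
	   my @done   = $it->();	 # () - empty list signals exhaustion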

       Get an iterator for the list of URLs:

	   my $url_iter = list_iter( @urls );

       Then wrap it in another iterator which will return the transformed
       results:

	   my $page_iter = iterate( \&fetch, $url_iter );

       Finally loop over the returned iterator storing results:

	   my @out = ( );
	   while ( my ( $index, $value ) = $page_iter->() ) {
	       $out[$index] = $value;
	   }

       Behind the scenes your program forked into ten (by default) instances
       of itself and executed the page requests in parallel.

   Simpler interfaces
       Having to construct an iterator is a pain so "iterate" is smart enough
       to do that for you. Instead of passing an iterator just pass a
       reference to the array:

	   my $page_iter = iterate( \&fetch, \@urls );

       If you pass a hash reference the iterator you get back will return key,
       value pairs:

	   my $some_iter = iterate( \&fetch, \%some_hash );

       If the returned iterator is inconvenient you can get back a hash or
       array instead:

	   my @done = iterate_as_array( \&fetch, \@urls );

	   my %done = iterate_as_hash( \&worker, \%jobs );

   How It Works
       The current process is forked once for each worker. Each forked child
       is connected to the parent by a pair of pipes. The child's STDIN,
       STDOUT and STDERR are unaffected.

       Input values are serialised (using Storable) and passed to the workers.
       Completed work items are serialised and returned.
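
       The following is a minimal sketch of that mechanism, not the module's
       actual implementation; the send_blob/recv_blob helpers are invented for
       the sketch. It round-trips a single work item through one forked child
       using the core IO::Pipe and Storable modules:

	   use strict;
	   use warnings;
	   use IO::Pipe;
	   use Storable qw( freeze thaw );

	   # Helpers for this sketch: each frozen blob is sent as a 4-byte
	   # length prefix followed by the bytes themselves.
	   sub send_blob {
	       my ( $fh, $data ) = @_;
	       my $frozen = freeze($data);
	       print $fh pack( 'N', length $frozen ), $frozen;
	   }

	   sub recv_blob {
	       my $fh = shift;
	       read $fh, my $len, 4 or return;
	       read $fh, my $buf, unpack( 'N', $len );
	       return thaw($buf);
	   }

	   my $to_child  = IO::Pipe->new;
	   my $to_parent = IO::Pipe->new;

	   if ( my $pid = fork ) {
	       $to_child->writer;
	       $to_parent->reader;
	       send_blob( $to_child, { id => 0, job => 21 } );	 # hand out work
	       close $to_child;
	       my $res = recv_blob($to_parent);			 # collect result
	       print "job $res->{id} returned $res->{value}\n";  # ... 42
	       waitpid $pid, 0;
	   }
	   else {
	       $to_child->reader;
	       $to_parent->writer;
	       my $item = recv_blob($to_child);
	       send_blob( $to_parent,
		   { id => $item->{id}, value => $item->{job} * 2 } );
	       close $to_parent;
	       exit 0;
	   }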

   Caveats
       Parallel::Iterator is designed to be simple to use - but the underlying
       forking of the main process can cause mystifying problems unless you
       have an understanding of what is going on behind the scenes.

       Worker execution environment

       All code apart from the worker subroutine executes in the parent
       process as normal. The worker executes in a forked instance of the
       parent process. That means that things like this won't work as
       expected:

	   my %tally = ();
	   my @r = iterate_as_array( sub {
	       my ($id, $name) = @_;
	       $tally{$name}++;	      # might not do what you think it does
	       return scalar reverse $name;
	   }, \@names );

	   # Now print out the tally...
	   while ( my ( $name, $count ) = each %tally ) {
	       printf("%5d : %s\n", $count, $name);
	   }

       Because the worker is a closure it can see the %tally hash from its
       enclosing scope; but because it's running in a forked clone of the
       parent process it modifies its own copy of %tally rather than the copy
       for the parent process.

       That means that after the job terminates the %tally in the parent
       process will be empty.

       In general you should avoid side effects in your worker subroutines.
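
       If you need aggregate data such as a tally, have the worker return
       whatever the parent needs and do the aggregation in the parent. A
       minimal reworking of the example above:

	   my %tally = ();
	   my $iter  = iterate( sub {
	       my ( $id, $name ) = @_;
	       return $name;		 # hand the name back to the parent
	   }, \@names );

	   while ( my ( $id, $name ) = $iter->() ) {
	       $tally{$name}++;		 # runs in the parent, so this works
	   }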

       Serialization

       Values are serialised using Storable to pass to the worker subroutine
       and results from the worker are again serialised before being passed
       back. Be careful what your values refer to: everything has to be
       serialised. If there's an indirect way to reach a large object graph
       Storable will find it and performance will suffer.
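
       For example, a small work item that holds a reference into a large
       shared structure will drag the whole structure through Storable on
       every round trip ($big and $item are illustrative):

	   use Storable qw( freeze );

	   my $big  = { rows => [ 1 .. 1_000_000 ] };
	   my $item = { name => 'job1', context => $big };   # oops

	   print length( freeze $item ), " bytes\n";	 # includes all of $big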

       To find out how large your serialised values are, serialise one of them
       and check its size:

	   use Storable qw( freeze );
	   my $serialized = freeze $some_obj;
	   print length($serialized), " bytes\n";

       In your tests you may wish to guard against the possibility of a change
       to the structure of your values resulting in a sudden increase in
       serialised size:

	   ok length(freeze $some_obj) < 1000, "Object too bulky?";

       See the documentation for Storable for other caveats.

       Performance

       Process forking is expensive. Only use Parallel::Iterator in cases
       where:

       the worker waits for I/O
	   The case of fetching web pages is a good example of this. Fetching
	   a page with LWP::UserAgent may take as long as a few seconds but
	   probably consumes only a few milliseconds of processor time.
	   Running many requests in parallel is a huge win - but be kind to
	   the server you're talking to: don't launch a lot of parallel
	   requests unless it's your server or you know it can handle the
	   load.

       the worker is CPU-intensive and you have multiple cores / CPUs
	   If the worker is doing an expensive calculation you can parallelise
	   it across multiple CPU cores. Benchmark first, though (see the
	   sketch below). There is considerable overhead associated with
	   Parallel::Iterator; unless your calculations are time-consuming,
	   that overhead will dwarf whatever time they take.
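
       A rough, illustrative way to benchmark is to time the same job with
       forking disabled and enabled; here a 100ms sleep stands in for an
       I/O-bound worker:

	   use Time::HiRes qw( time );
	   use Parallel::Iterator qw( iterate_as_array );

	   my $worker = sub {
	       my ( $id, $n ) = @_;
	       select( undef, undef, undef, 0.1 );   # pretend to wait for I/O
	       return $n;
	   };

	   for my $workers ( 0, 10 ) {
	       my $start = time;
	       iterate_as_array( { workers => $workers }, $worker, [ 1 .. 50 ] );
	       printf "workers=%-2d %6.2fs\n", $workers, time - $start;
	   }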

INTERFACE
   "iterate( [ $options ], $worker, $iterator )"
       Get an iterator that applies the supplied transformation function to
       each value returned by the input iterator.

       Instead of an iterator you may pass an array or hash reference and
       "iterate" will convert it internally into a suitable iterator.

       If you are doing this you may wish to investigate "iterate_as_hash" and
       "iterate_as_array".

       Options

       A reference to a hash of options may be supplied as the first argument.
       The following options are supported:

       "workers"
	   The number of concurrent processes to launch. Set this to 0 to
	   disable forking. Defaults to 10 on systems that support fork and 0
	   (disable forking) on those that do not.
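
	   For example, to run four workers ($worker and @jobs are
	   placeholders):

	       my $iter = iterate( { workers => 4 }, $worker, \@jobs );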

       "nowarn"
	   Normally "iterate" will issue a warning and fall back to single
	   process mode on systems on which fork is not available. This option
	   supresses that warning.

       "batch"
	   Ordinarily items are passed to the worker one at a time. If you are
	   processing a large number of items it may be more efficient to
	   process them in batches. Specify the batch size using this option.

	   Batching is transparent from the caller's perspective. Internally
	   it modifies the iterators and worker (by wrapping them in
	   additional closures) so that they pack, process and unpack chunks
	   of work.
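
	   For example, to ship 25 items to a worker per round trip:

	       my $iter = iterate( { batch => 25 }, $worker, \@jobs );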

       "adaptive"
	   Extending the idea of batching a number of work items to amortize
	   the overhead of passing work to and from parallel workers you may
	   also ask "iterate" to heuristically determine the batch size by
	   setting the "adaptive" option to a numeric value.

	   The batch size will be computed as

	       <number of items seen> / <number of workers> / <adaptive>

	   A larger value for "adaptive" will reduce the rate at which the
	   batch size increases. Good values tend to be in the range 1 to 2.

	   You can also specify lower and, optionally, upper bounds on the
	   batch size by passing a reference to an array containing ( lower
	   bound, growth ratio, upper bound ). The upper bound may be omitted.

	       my $iter = iterate(
		   { adaptive => [ 5, 2, 100 ] },
		   $worker, \@stuff );

       "onerror"
	   The action to take when an error is thrown in the iterator.
	   Possible values are 'die', 'warn' or a reference to a subroutine
	   that will be called with the index of the job that threw the
	   exception and the value of $@ thrown.

	       iterate(
		   {
		       onerror => sub {
			   my ($id, $err) = @_;
			   $self->log( "Error for index $id: $err" );
		       },
		   },
		   $worker,
		   \@jobs
	       );

	   The default is 'die'.

   "iterate_as_array"
       As "iterate" but instead of returning an iterator returns an array
       containing the collected output from the iterator. In a scalar context
       returns a reference to the same array.

       For this to work properly the input iterator must return (index, value)
       pairs. This allows the results to be placed in the correct slots in the
       output array. The simplest way to do this is to pass an array reference
       as the input iterator:

	   my @output = iterate_as_array( \&some_handler, \@input );

   "iterate_as_hash"
       As "iterate" but instead of returning an iterator returns a hash
       containing the collected output from the iterator. In a scalar context
       returns a reference to the same hash.

       For this to work properly the input iterator must return (key, value)
       pairs. This allows the results to be placed in the correct slots in the
       output hash. The simplest way to do this is to pass a hash reference as
       the input iterator:

	   my %output = iterate_as_hash( \&some_handler, \%input );

CONFIGURATION AND ENVIRONMENT
       Parallel::Iterator requires no configuration files or environment
       variables.

DEPENDENCIES
       None.

INCOMPATIBILITIES
       None reported.

BUGS AND LIMITATIONS
       No bugs have been reported.

       Please report any bugs or feature requests to
       "bug-parallel-iterator@rt.cpan.org", or through the web interface at
       <http://rt.cpan.org>.

AUTHOR
       Andy Armstrong  "<andy@hexten.net>"

THANKS
       Aristotle Pagaltzis for the END handling suggestion and patch.

LICENCE AND COPYRIGHT
       Copyright (c) 2007, Andy Armstrong "<andy@hexten.net>". All rights
       reserved.

       This module is free software; you can redistribute it and/or modify it
       under the same terms as Perl itself. See perlartistic.

DISCLAIMER OF WARRANTY
       BECAUSE THIS SOFTWARE IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
       FOR THE SOFTWARE, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT
       WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER
       PARTIES PROVIDE THE SOFTWARE "AS IS" WITHOUT WARRANTY OF ANY KIND,
       EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
       WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE
       ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE SOFTWARE IS WITH
       YOU. SHOULD THE SOFTWARE PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL
       NECESSARY SERVICING, REPAIR, OR CORRECTION.

       IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
       WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
       REDISTRIBUTE THE SOFTWARE AS PERMITTED BY THE ABOVE LICENCE, BE LIABLE
       TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL, OR
       CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
       SOFTWARE (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
       RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
       FAILURE OF THE SOFTWARE TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
       SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
       DAMAGES.

perl v5.14.1			  2011-06-20		 Parallel::Iterator(3)