QUEUE_MOVER3(1)						       QUEUE_MOVER3(1)

NAME
       queue_mover3 - PgQ consumer that copies data from one queue to another.

SYNOPSIS
       queue_mover3 [switches] config.ini

DESCRIPTION
       queue_mover3 is a PgQ consumer that transports events from a source
       queue into a target queue. One use case is when events are produced
       in several databases and queue_mover3 is used to consolidate them
       into a single queue that can then be processed by consumers which
       need to handle all of these events. For example, with partitioned
       databases it is convenient to move events from each partition into
       one central queue database and process them there. That way the
       configuration and dependencies of the partition databases are
       simpler and more robust. Another use case is to move events from an
       OLTP database to a batch processing server.

       Transactionality: events will be inserted as one transaction on the
       target side. That means only the batch_id needs to be tracked on
       the target side.
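
       The batch handling can be illustrated with a short, hedged sketch.
       This is not the actual implementation of queue_mover3; it is a
       minimal psycopg2 loop that copies one batch using the PgQ SQL
       functions pgq.next_batch, pgq.get_batch_events, pgq.insert_event
       and pgq.finish_batch, assuming the two-argument
       pgq_ext.is_batch_done/set_batch_done signatures. The connection
       strings, queue names and consumer name are placeholders, and the
       consumer is assumed to be already registered with
       pgq.register_consumer.

           # Hedged sketch: copy one PgQ batch from source to target.
           # All names are placeholders, not queue_mover3 defaults.
           import psycopg2

           src = psycopg2.connect("dbname=sourcedb")
           dst = psycopg2.connect("dbname=targetdb")
           sc, dc = src.cursor(), dst.cursor()
           src_q, dst_q, consumer = "eventlog", "copy_of_eventlog", "mover"

           # Ask the source ticker for the next batch (None = nothing to do).
           sc.execute("select pgq.next_batch(%s, %s)", (src_q, consumer))
           batch_id = sc.fetchone()[0]
           if batch_id is not None:
               # Skip batches already applied on the target (pgq_ext tracking).
               dc.execute("select pgq_ext.is_batch_done(%s, %s)",
                          (consumer, batch_id))
               if not dc.fetchone()[0]:
                   sc.execute("select ev_type, ev_data, ev_extra1, ev_extra2,"
                              " ev_extra3, ev_extra4"
                              " from pgq.get_batch_events(%s)", (batch_id,))
                   for ev in sc.fetchall():
                       dc.execute("select pgq.insert_event(%s, %s, %s, %s,"
                                  " %s, %s, %s)", (dst_q,) + ev)
                   dc.execute("select pgq_ext.set_batch_done(%s, %s)",
                              (consumer, batch_id))
               # Events and the done-marker land in one target transaction.
               dst.commit()
               sc.execute("select pgq.finish_batch(%s)", (batch_id,))
               src.commit()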

QUICK-START
       Basic PgQ setup and usage can be summarized by the following steps:

        1. PgQ must be installed in both the source and target databases.
           See the pgqadm man page for details.

        2. The target database must also have the pgq_ext schema installed.
           It is used to keep the two databases in sync.

	3. Create a queue_mover configuration file, say
	   qmover_sourceq_to_targetdb.ini

        4. Create the source and target queues

	       $ pgqadm.py sourcedb_ticker.ini create <srcqueue>
	       $ pgqadm.py targetdb_ticker.ini create <dstqueue>

        5. Launch the queue mover in daemon mode

	       $ queue_mover3 -d qmover_sourceq_to_targetdb.ini

        6. Start producing and consuming events; a minimal end-to-end
           sketch follows this list.
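
       As a hedged illustration of step 6, the snippet below inserts one
       test event into the source queue and then reads it back from the
       target queue once the tickers and the mover have processed it. The
       queue and consumer names are examples only, and pgq.next_batch may
       return None until a tick has happened on the target side.

           # Hedged smoke test: produce on the source, consume on the target.
           import psycopg2

           src = psycopg2.connect("dbname=sourcedb")
           cur = src.cursor()
           cur.execute("select pgq.insert_event(%s, %s, %s)",
                       ("eventlog", "test", "hello"))
           src.commit()

           dst = psycopg2.connect("dbname=targetdb")
           cur = dst.cursor()
           cur.execute("select pgq.register_consumer(%s, %s)",
                       ("copy_of_eventlog", "smoke_test"))
           cur.execute("select pgq.next_batch(%s, %s)",
                       ("copy_of_eventlog", "smoke_test"))
           batch_id = cur.fetchone()[0]   # None until the next tick
           if batch_id is not None:
               cur.execute("select ev_type, ev_data"
                           " from pgq.get_batch_events(%s)", (batch_id,))
               print(cur.fetchall())
               cur.execute("select pgq.finish_batch(%s)", (batch_id,))
           dst.commit()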

CONFIG
   Common configuration parameters
       job_name
           Name for the particular job the script does. The script will log
           under this name to the logdb/logserver. The name is also used as
           the default PgQ consumer name. It should be unique.

       pidfile
           Location for the pid file. If not given, the script is not
           allowed to daemonize.

       logfile
	   Location for log file.

       loop_delay
           For a continuously running process, how long to sleep after each
           work loop, in seconds. Default: 1.

       connection_lifetime
           Close and reconnect database connections that are older than
           this many seconds.

       use_skylog
           If set, use skylog for logging.

   Common PgQ consumer parameters
       queue_name
	   Queue name to attach to. No default.

       consumer_name
           Consumer ID to use when registering. Default: %(job_name)s

   queue_mover parameters
       src_db
	   Source database.

       dst_db
	   Target database.

       dst_queue_name
	   Target queue name.

   Example config file
	   [queue_mover3]
	   job_name = eventlog_to_target_mover
	   src_db = dbname=sourcedb
	   dst_db = dbname=targetdb
           queue_name = eventlog
	   dst_queue_name = copy_of_eventlog
           pidfile = pid/%(job_name)s.pid
           logfile = log/%(job_name)s.log

COMMAND LINE SWITCHES
       The following switches are common to all skytools.DBScript-based
       Python programs.

       -h, --help
	   show help message and exit

       -q, --quiet
	   make program silent

       -v, --verbose
	   make program more verbose

       -d, --daemon
           make the program run in the background

       --ini
	   show commented template config file.

       The following switches are used to control an already running
       process. The pidfile is read from the config file, then a signal is
       sent to the process ID specified there; a sketch of this mechanism
       follows the switch list.

       -r, --reload
	   reload config (send SIGHUP)

       -s, --stop
	   stop program safely (send SIGINT)

       -k, --kill
           kill program immediately (send SIGTERM)
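
       As a rough sketch of what these switches do behind the scenes (this
       is not the actual skytools.DBScript code, and the pidfile path is
       just an example): read the process ID from the pidfile named in the
       config file and send it the corresponding signal.

           # Hedged sketch of the -r/-s/-k mechanism; example pidfile path.
           import os, signal

           with open("pid/eventlog_to_target_mover.pid") as f:
               pid = int(f.read().strip())

           os.kill(pid, signal.SIGHUP)     # -r / --reload
           # os.kill(pid, signal.SIGINT)   # -s / --stop
           # os.kill(pid, signal.SIGTERM)  # -k / --kill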

BUGS
       Event IDs are not kept on the target side. If needed, they can be
       kept, but then the event_id sequence on the target side needs to be
       increased by hand to inform the ticker about new events; one way to
       do this is sketched below.
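
       One possible way to do that manual increase is sketched below. It
       assumes the per-queue event ID sequence name can be read from
       pgq.queue.queue_event_seq; verify this against your PgQ version
       before relying on it, and adjust the increment to the number of
       copied events.

           # Hedged sketch: bump the target queue's event ID sequence.
           import psycopg2

           dst = psycopg2.connect("dbname=targetdb")
           cur = dst.cursor()
           cur.execute("select queue_event_seq from pgq.queue"
                       " where queue_name = %s", ("copy_of_eventlog",))
           seq = cur.fetchone()[0]
           # Advance by the number of events copied (here: 1000).
           cur.execute("select setval(%s::regclass,"
                       " nextval(%s::regclass) + 1000)", (seq, seq))
           dst.commit()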

				  04/01/2014		       QUEUE_MOVER3(1)