fio man page on DragonFly


fio(1)									fio(1)

NAME
       fio - flexible I/O tester

SYNOPSIS
       fio [options] [jobfile]...

DESCRIPTION
       fio  is a tool that will spawn a number of threads or processes doing a
       particular type of I/O action as specified by the  user.	  The  typical
       use  of	fio  is to write a job file matching the I/O load one wants to
       simulate.

OPTIONS
       --debug=type
	      Enable verbose tracing of various fio actions. May be `all' for
	      all types, or individual types separated by a comma (e.g.
	      --debug=io,file). `help' will list all available tracing
	      options.

       --output=filename
	      Write output to filename.

       --output-format=format
	      Set the reporting format to normal, terse, json, or json+.
	      Multiple formats can be selected, separated by a comma. terse
	      is a CSV-based format. json+ is like json, except it adds a
	      full dump of the latency buckets.

       --runtime=runtime
	      Limit run time to runtime seconds.

       --bandwidth-log
	      Generate per-job bandwidth logs.

       --minimal
	      Print statistics in a terse, semicolon-delimited format.

       --append-terse
	      Print statistics in selected mode AND terse, semicolon-delimited
	      format.	Deprecated, use --output-format instead to select mul‐
	      tiple formats.

       --version
	      Display version information and exit.

       --terse-version=version
	      Set terse version output format (Current	version	 3,  or	 older
	      version 2).

       --help Display usage information and exit.

       --cpuclock-test
	      Perform test and validation of the internal CPU clock.

       --crctest[=test]
	      Test  the	 speed	of  the	 builtin checksumming functions. If no
	      argument is given, all of them are tested. Or a comma  separated
	      list can be passed, in which case the given ones are tested.

       --cmdhelp=command
	      Print  help  information for command.  May be `all' for all com‐
	      mands.

       --enghelp=ioengine[,command]
	      List all commands defined by ioengine, or print help for command
	      defined by ioengine.

       --showcmd=jobfile
	      Convert jobfile to a set of command-line options.

       --eta=when
	      Specifies	 when  real-time ETA estimate should be printed.  when
	      may be one of `always', `never' or `auto'.

       --eta-newline=time
	      Force an ETA newline for every `time` period passed.

       --status-interval=time
	      Report full output status every `time` period passed.

       --readonly
	      Turn on safety read-only checks, preventing any attempted write.

       --section=sec
	      Only run section sec from job file. This option can be used mul‐
	      tiple times to add more sections to run.

       --alloc-size=kb
	      Set the internal smalloc pool size to kb kilobytes.

       --warnings-fatal
	      All  fio	parser warnings are fatal, causing fio to exit with an
	      error.

       --max-jobs=nr
	      Set the maximum allowed number of	 jobs  (threads/processes)  to
	      support.

       --server=args
	      Start  a backend server, with args specifying what to listen to.
	      See client/server section.

       --daemonize=pidfile
	      Background a fio server, writing the pid to the given pid file.

       --client=host
	      Instead of running the jobs locally, send and run	 them  on  the
	      given host or set of hosts.  See client/server section.

       --idle-prof=option
	      Report cpu idleness on a system or percpu basis
	      (option=system,percpu) or run unit work calibration only
	      (option=calibrate).

JOB FILE FORMAT
       Job  files are in `ini' format. They consist of one or more job defini‐
       tions, which begin with a job name in square brackets and extend to the
       next  job  name.	 The job name can be any ASCII string except `global',
       which has a special meaning.  Following the job name is a  sequence  of
       zero  or more parameters, one per line, that define the behavior of the
       job.  Any line starting with a `;' or `#'  character  is	 considered  a
       comment and ignored.

       If jobfile is specified as `-', the job file will be read from standard
       input.

   Global Section
       The global section contains default parameters for  jobs	 specified  in
       the job file.  A job is only affected by global sections residing above
       it, and there may be any number of global sections.  Specific job defi‐
       nitions may override any parameter set in global sections.
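
       As an illustrative sketch, a job file with one global section and two
       jobs could look like this (the job names are arbitrary):

	      ; -- start job file --
	      [global]
	      rw=randread
	      size=128m

	      [job1]

	      [job2]
	      ; -- end job file --

       Here both job1 and job2 inherit the rw and size parameters from the
       global section above them.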

JOB PARAMETERS
   Types
       Some  parameters	 may  take  arguments  of a specific type.  Anywhere a
       numeric value is required, an arithmetic expression may be  used,  pro‐
       vided it is surrounded by parentheses. Supported operators are:

		     addition (+)

		     subtraction (-)

		     multiplication (*)

		     division (/)

		     modulus (%)

		     exponentiation (^)

       For time values in expressions, units are microseconds by default. This
       is different than for time values not in expressions (not  enclosed  in
       parentheses). The types used are:

       str    String: a sequence of alphanumeric characters.

       int    SI  integer: a whole number, possibly containing a suffix denot‐
	      ing the base unit of the value.  Accepted suffixes are `k', 'M',
	      'G',  'T',  and  'P',  denoting kilo (1024), mega (1024^2), giga
	      (1024^3), tera (1024^4), and peta (1024^5) respectively. If pre‐
	      fixed  with  '0x', the value is assumed to be base 16 (hexadeci‐
	      mal). A suffix may include a trailing 'b', for instance 'kb'  is
	      identical	 to  'k'.  You	can  specify  a base 10 value by using
	      'KiB', 'MiB','GiB', etc. This is useful for  disk	 drives	 where
	      values  are  often  given	 in base 10 values. Specifying '30GiB'
	      will get you 30*1000^3 bytes.  When specifying times the
	      default suffix meaning changes, still denoting the base unit of
	      the value, but accepted suffixes are 'D' (days), 'H' (hours),
	      'M' (minutes), 'S' (seconds), 'ms' (or 'msec', milliseconds),
	      and 'us' (or 'usec', microseconds). Time values without a unit
	      specify seconds.  The suffixes are not case sensitive.
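
	      A few illustrative values combining the suffix and expression
	      rules above:

		     size=30G              ; 30*1024^3 bytes
		     size=30GiB            ; 30*1000^3 bytes
		     runtime=5M            ; a time value, so 5 minutes
		     size=(2*1024*1024)    ; arithmetic expression, 2M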

       bool   Boolean:	a  true or false value. `0' denotes false, `1' denotes
	      true.

       irange Integer range: a range  of  integers  specified  in  the	format
	      lower:upper or lower-upper. lower and upper may contain a suffix
	      as described above.  If an option allows	two  sets  of  ranges,
	      they  are	 separated  with  a `,' or `/' character. For example:
	      `8-8k/8M-4G'.

       float_list
	      List of floating numbers: A list of floating numbers,  separated
	      by a ':' character.

   Parameter List
       name=str
	      May be used to override the job name.  On the command line, this
	      parameter has the special purpose of signalling the start	 of  a
	      new job.

       wait_for=str
	      Specifies the name of an already defined job to wait for. Only
	      a single waitee name may be specified. If set, the job won't be
	      started until all workers of the waitee job are done.  wait_for
	      operates on a job-name basis, so there are a few limitations.
	      First, the waitee must be defined prior to the waiter job
	      (meaning no forward references). Second, if a job is being
	      referenced as a waitee, it must have a unique name (no
	      duplicate waitees).

       description=str
	      Human-readable  description  of  the job. It is printed when the
	      job is run, but otherwise has no special purpose.

       directory=str
	      Prefix filenames with this directory.  Used to place files in a
	      location other than `./'.  You can specify a number of
	      directories by separating the names with a ':' character. These
	      directories will be distributed equally among the job clones
	      created with numjobs, as long as those clones use generated
	      filenames. If specific filename(s) are set, fio will use the
	      first listed directory, matching the filename semantics: each
	      clone gets its own file if no filename is specified, but all
	      clones use the same file if one is set. See filename for
	      considerations regarding escaping certain characters on some
	      platforms.

       filename=str
	      fio  normally makes up a file name based on the job name, thread
	      number, and file number. If you  want  to	 share	files  between
	      threads in a job or several jobs, specify a filename for each of
	      them to override the default.  If the I/O engine is  file-based,
	      you can specify a number of files by separating the names with a
	      `:' character. `-' is a reserved name, meaning stdin or stdout,
	      depending on the read/write direction set. On Windows, disk
	      devices are accessed as \\.\PhysicalDrive0 for the first
	      device, \\.\PhysicalDrive1 for the second, etc. Note: Windows
	      and FreeBSD prevent write access to areas of the disk
	      containing in-use data (e.g. filesystems). If the wanted
	      filename does need to include a colon, escape it with a '\'
	      character. For instance, if the filename is
	      "/dev/dsk/foo@3,0:c", then you would use
	      filename="/dev/dsk/foo@3,0\:c".
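
	      As an illustrative sketch, a job whose clones all share the
	      same two devices (the device names are hypothetical):

		     [shared]
		     filename=/dev/sda:/dev/sdb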

       filename_format=str
	      If sharing multiple files between jobs, it is usually  necessary
	      to  have fio generate the exact names that you want. By default,
	      fio will name a file based on the default file format specifica‐
	      tion of jobname.jobnumber.filenumber. With this option, that can
	      be customized. Fio will recognize and replace the following key‐
	      words in this string:

		     $jobname
			    The name of the worker thread or process.

		     $jobnum
			    The	 incremental  number  of  the worker thread or
			    process.

		     $filenum
			    The incremental number of the file for that worker
			    thread or process.

	      To  have dependent jobs share a set of files, this option can be
	      set to have fio generate filenames that are shared  between  the
	      two. For instance, if testfiles.$filenum is specified, file num‐
	      ber 4 for any job will be	 named	testfiles.4.  The  default  of
	      $jobname.$jobnum.$filenum will be used if no other format speci‐
	      fier is given.
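
	      As an illustrative sketch, two jobs sharing a set of files
	      through a common format specifier (the names are arbitrary):

		     [writer]
		     filename_format=testfiles.$filenum
		     nrfiles=4

		     [reader]
		     filename_format=testfiles.$filenum
		     nrfiles=4

	      Both jobs would then operate on the same testfiles.* set.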

       lockfile=str
	      Fio defaults to not locking any files before it does IO to them.
	      If  a file or file descriptor is shared, fio can serialize IO to
	      that file to make the end result consistent. This is  usual  for
	      emulating real workloads that share files.  The lock modes are:

		     none   No locking. This is the default.

		     exclusive
			    Only  one  thread  or process may do IO at a time,
			    excluding all others.

		     readwrite
			    Read-write locking on the file. Many  readers  may
			    access  the	 file at the same time, but writes get
			    exclusive access.

       opendir=str
	      Recursively open any files below directory str.

       readwrite=str, rw=str
	      Type of I/O pattern.  Accepted values are:

		     read   Sequential reads.

		     write  Sequential writes.

		     trim   Sequential trim (Linux block devices only).

		     randread
			    Random reads.

		     randwrite
			    Random writes.

		     randtrim
			    Random trim (Linux block devices only).

		     rw, readwrite
			    Mixed sequential reads and writes.

		     randrw Mixed random reads and writes.

		     trimwrite
			    Trim and write  mixed  workload.  Blocks  will  be
			    trimmed  first, then the same blocks will be writ‐
			    ten to.

	      For mixed I/O, the default split is 50/50. For certain types of
	      I/O the result may still be skewed a bit, since the speeds may
	      differ. It is possible to specify a number of IOs to do before
	      getting a new offset; this is done by appending a `:<nr>' to
	      the end of the string given. For a random read, it would look
	      like rw=randread:8 for passing in an offset modifier with a
	      value of 8. If the postfix is used with a sequential IO
	      pattern, then the value specified will be added to the
	      generated offset for each IO. For instance, using rw=write:4k
	      will skip 4k for every write. It turns sequential IO into
	      sequential IO with holes. See the rw_sequencer option.
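
	      As an illustrative sketch, a sequential write job that leaves a
	      4k hole after every 4k write:

		     [write-holes]
		     rw=write:4k
		     bs=4k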

       rw_sequencer=str
	      If an offset modifier is given by	 appending  a  number  to  the
	      rw=<str>	line,  then this option controls how that number modi‐
	      fies the IO offset being generated. Accepted values are:

		     sequential
			    Generate sequential offset

		     identical
			    Generate the same offset

	      sequential is only useful for random IO, where  fio  would  nor‐
	      mally  generate  a new random offset for every IO. If you append
	      eg 8 to randread, you would get a new random offset for every  8
	      IO's.  The result would be a seek for only every 8 IO's, instead
	      of for every IO. Use rw=randread:8 to specify that.  As  sequen‐
	      tial IO is already sequential, setting sequential for that would
	      not result in any differences.  identical behaves in  a  similar
	      fashion,	except	it  sends  the	same  offset 8 number of times
	      before generating a new offset.
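
	      For example (illustrative), to issue each random offset 8
	      times before generating a new one:

		     rw=randread:8
		     rw_sequencer=identical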

       kb_base=int
	      The base unit for a kilobyte. The de facto base is 2^10, 1024.
	      Storage manufacturers like to use 10^3 or 1000 as a base-ten
	      unit instead, for obvious reasons. Allowed values are 1024 or
	      1000, with 1024 being the default.

       unified_rw_reporting=bool
	      Fio  normally  reports statistics on a per data direction basis,
	      meaning that read, write, and trim are  accounted	 and  reported
	      separately.  If  this  option  is	 set  fio sums the results and
	      reports them as "mixed" instead.

       randrepeat=bool
	      Seed the random number generator used for random I/O patterns in
	      a	 predictable  way  so  the  pattern is repeatable across runs.
	      Default: true.

       allrandrepeat=bool
	      Seed all random  number  generators  in  a  predictable  way  so
	      results are repeatable across runs.  Default: false.

       randseed=int
	      Seed  the	 random number generators based on this seed value, to
	      be able to control what sequence of output is  being  generated.
	      If  not  set, the random sequence depends on the randrepeat set‐
	      ting.

       fallocate=str
	      Whether pre-allocation is	 performed  when  laying  down	files.
	      Accepted values are:

		     none   Do not pre-allocate space.

		     posix  Pre-allocate via posix_fallocate(3).

		     keep   Pre-allocate via fallocate(2) with
			    FALLOC_FL_KEEP_SIZE set.

		     0	    Backward-compatible alias for 'none'.

		     1	    Backward-compatible alias for 'posix'.

	      May not be available on all supported platforms. 'keep' is  only
	      available	 on Linux. If using ZFS on Solaris this must be set to
	      'none' because ZFS doesn't support it. Default: 'posix'.

       fadvise_hint=bool
	      Use posix_fadvise(2) to advise the kernel what I/O patterns  are
	      likely to be issued. Default: true.

       fadvise_stream=int
	      Use  posix_fadvise(2)  to	 advise	 the kernel what stream ID the
	      writes issued belong to. Only supported  on  Linux.  Note,  this
	      option may change going forward.

       size=int
	      Total  size  of  I/O for this job.  fio will run until this many
	      bytes have been transferred, unless  limited  by	other  options
	      (runtime, for instance, or increased/decreased by io_size).
	      Unless nrfiles and filesize options are given, this amount  will
	      be  divided between the available files for the job. If not set,
	      fio will use the full size of the given files or devices. If the
	      files  do	 not exist, size must be given. It is also possible to
	      give size as a percentage between 1  and	100.  If  size=20%  is
	      given,  fio  will use 20% of the full size of the given files or
	      devices.

       io_size=int, io_limit=int
	      Normally fio operates within the region set by size, which means
	      that  the	 size option sets both the region and size of IO to be
	      performed.  Sometimes that is  not  what	you  want.  With  this
	      option,  it is possible to define just the amount of IO that fio
	      should do. For instance, if size is set to 20G and  io_limit  is
	      set  to  5G,  fio	 will perform IO within the first 20G but exit
	      when 5G have been done. The opposite is also possible - if  size
	      is  set  to 20G, and io_size is set to 40G, then fio will do 40G
	      of IO within the 0..20G region.
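
	      As an illustrative sketch of the first case described above:

		     [limited]
		     size=20g
		     io_size=5g

	      This job performs IO within the first 20G of the file(s), but
	      exits after 5G of IO has been done.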

       fill_device=bool, fill_fs=bool
	      Sets size to something really large and  waits  for  ENOSPC  (no
	      space  left  on device) as the terminating condition. Only makes
	      sense with sequential write.  For a  read	 workload,  the	 mount
	      point  will  be filled first then IO started on the result. This
	      option doesn't make sense if operating on	 a  raw	 device	 node,
	      since  the  size	of  that  is already known by the file system.
	      Additionally,  writing  beyond  end-of-device  will  not	return
	      ENOSPC there.

       filesize=irange
	      Individual  file	sizes.	May be a range, in which case fio will
	      select sizes for files at random within the given range, limited
	      to  size	in total (if that is given). If filesize is not speci‐
	      fied, each created file is the same size.

       file_append=bool
	      Perform IO after the end of the file. Normally fio will  operate
	      within  the size of a file. If this option is set, then fio will
	      append to the file instead. This has identical behavior to  set‐
	      ting  offset  to	the  size of a file. This option is ignored on
	      non-regular files.

       blocksize=int[,int], bs=int[,int]
	      Block size for I/O units.  Default: 4k.  Values for reads,
	      writes, and trims can be specified separately in the format
	      read,write,trim, any of which may be empty to leave that value
	      at its default. If a trailing comma isn't given, the remainder
	      will inherit the last value set.

       blocksize_range=irange[,irange], bsrange=irange[,irange]
	      Specify a range of I/O block sizes.  The issued I/O unit will
	      always be a multiple of the minimum size, unless
	      blocksize_unaligned is set.  Applies to both reads and writes
	      if only one range is given, but can be specified separately
	      with a comma separating the values. Example:
	      bsrange=1k-4k,2k-8k.  See also blocksize.

       bssplit=str
	      This option allows even finer grained control of the block sizes
	      issued, not just even splits between them. With this option, you
	      can  weight  various block sizes for exact control of the issued
	      IO for a job that has mixed block sizes. The format of the
	      option is bssplit=blocksize/percentage, optionally adding as
	      many definitions as needed, separated by a colon. Example:
	      bssplit=4k/10:64k/50:32k/40 would issue 50% 64k blocks, 10% 4k
	      blocks and 40% 32k blocks. bssplit also supports giving
	      separate splits to reads and writes. The format is identical to
	      what the bs option accepts; the read and write parts are
	      separated with a comma.
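
	      For example (illustrative), 90% 4k and 10% 64k reads, with all
	      writes issued as 1M blocks:

		     bssplit=4k/90:64k/10,1M/100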

       blocksize_unaligned, bs_unaligned
	      If set, any size in blocksize_range may be used.	This typically
	      won't work with direct I/O, as  that  normally  requires	sector
	      alignment.

       blockalign=int[,int], ba=int[,int]
	      At what boundary to align random IO offsets. Defaults to the
	      same as blocksize, the minimum blocksize given. Minimum
	      alignment is typically 512b when using direct IO, though it
	      usually depends on the hardware block size. This option is
	      mutually exclusive with using a random map for files, so it
	      will turn off that option.

       bs_is_seq_rand=bool
	      If this option is set, fio will use the normal read,write block‐
	      size  settings  as sequential,random instead. Any random read or
	      write will use the WRITE blocksize settings, and any  sequential
	      read or write will use the READ blocksize setting.

       zero_buffers
	      Initialize  buffers  with	 all zeros. Default: fill buffers with
	      random data.

       refill_buffers
	      If this option is given, fio will refill the IO buffers on every
	      submit.  The  default  is to only fill it at init time and reuse
	      that data. Only makes sense  if  zero_buffers  isn't  specified,
	      naturally.  If  data  verification is enabled, refill_buffers is
	      also automatically enabled.

       scramble_buffers=bool
	      If refill_buffers is too costly and the  target  is  using  data
	      deduplication, then setting this option will slightly modify the
	      IO buffer contents to defeat normal de-dupe  attempts.  This  is
	      not enough to defeat more clever block compression attempts, but
	      it will stop naive dedupe of blocks. Default: true.

       buffer_compress_percentage=int
	      If this is set, then fio will attempt to provide IO buffer  con‐
	      tent  (on WRITEs) that compress to the specified level. Fio does
	      this by providing a mix of random data and a fixed pattern.  The
	      fixed pattern is either zeroes, or the pattern specified by
	      buffer_pattern. If the pattern option is used, it might skew
	      the compression ratio slightly. Note that this is per block
	      size unit; for a file/disk-wide compression level that matches
	      this setting, you'll also want to set refill_buffers.
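
	      As an illustrative sketch, a write job targeting roughly 50%
	      compressible buffer contents:

		     [compressible]
		     rw=write
		     buffer_compress_percentage=50
		     refill_buffers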

       buffer_compress_chunk=int
	      See buffer_compress_percentage. This setting allows fio to
	      manage how big the ranges of random data and zeroed data are.
	      Without this set, fio will provide buffer_compress_percentage
	      of blocksize random data, followed by the remaining zeroed.
	      With this set to some chunk size smaller than the block size,
	      fio can alternate random and zeroed data throughout the IO
	      buffer.

       buffer_pattern=str
	      If set, fio will fill the IO buffers with this pattern. If not
	      set, the contents of IO buffers are defined by the other
	      options related to buffer contents. The setting can be any
	      pattern of bytes, and can be prefixed with 0x for hex values.
	      It may also be a string, in which case the string must be
	      wrapped with "", e.g.:
		     buffer_pattern="abcd"
			    or
		     buffer_pattern=-12
			    or
		     buffer_pattern=0xdeadface

	      Also you can combine everything together in any order:

		     buffer_pattern=0xdeadface"abcd"-12

       dedupe_percentage=int
	      If  set,	fio will generate this percentage of identical buffers
	      when writing.  These buffers will be  naturally  dedupable.  The
	      contents	of the buffers depend on what other buffer compression
	      settings have been set. It's possible  to	 have  the  individual
	      buffers  either  fully  compressible, or not at all. This option
	      only controls the distribution of unique buffers.

       nrfiles=int
	      Number of files to use for this job.  Default: 1.

       openfiles=int
	      Number of files  to  keep	 open  at  the	same  time.   Default:
	      nrfiles.

       file_service_type=str
	      Defines  how files to service are selected.  The following types
	      are defined:

		     random Choose a file at random.

		     roundrobin
			    Round robin over opened files (default).

		     sequential
			    Do each file in the set sequentially.

	      The number of I/Os to issue before switching to a new  file  can
	      be specified by appending `:int' to the service type.
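
	      For example (illustrative), to issue 4 I/Os to a randomly
	      chosen file before switching:

		     file_service_type=random:4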

       ioengine=str
	      Defines  how  the	 job  issues  I/O.   The  following  types are
	      defined:

		     sync   Basic read(2) or write(2) I/O.  fseek(2)  is  used
			    to position the I/O location.

		     psync  Basic pread(2) or pwrite(2) I/O.

		     vsync  Basic  readv(2)  or	 writev(2)  I/O.  Will emulate
			    queuing by coalescing adjacent IOs into  a	single
			    submission.

		     pvsync Basic preadv(2) or pwritev(2) I/O.

		     libaio Linux   native  asynchronous  I/O.	This  ioengine
			    defines engine specific options.

		     posixaio
			    POSIX  asynchronous	 I/O  using  aio_read(3)   and
			    aio_write(3).

		     solarisaio
			    Solaris native asynchronous I/O.

		     windowsaio
			    Windows native asynchronous I/O.

		     mmap   File is memory mapped with mmap(2) and data copied
			    using memcpy(3).

		     splice splice(2)  is  used	 to  transfer  the  data   and
			    vmsplice(2)	 to  transfer  data from user-space to
			    the kernel.

		     syslet-rw
			    Use	 the  syslet  system  calls  to	 make  regular
			    read/write asynchronous.

		     sg	    SCSI  generic sg v3 I/O. May be either synchronous
			    using the SG_IO ioctl, or if the target is	an  sg
			    character  device, we use read(2) and write(2) for
			    asynchronous I/O.

		     null   Doesn't  transfer  any  data,  just	 pretends  to.
			    Mainly  used to exercise fio itself and for debug‐
			    ging and testing purposes.

		     net    Transfer over the network.	 The  protocol	to  be
			    used  can  be defined with the protocol parameter.
			    Depending on  the  protocol,  filename,  hostname,
			    port,  or listen must be specified.	 This ioengine
			    defines engine specific options.

		     netsplice
			    Like net, but uses splice(2)  and  vmsplice(2)  to
			    map	 data  and send/receive. This ioengine defines
			    engine specific options.

		     cpuio  Doesn't transfer any data, but  burns  CPU	cycles
			    according to cpuload and cpucycles parameters.

		     guasi  The	 GUASI	I/O  engine  is	 the Generic Userspace
			    Asynchronous Syscall Interface approach  to	 asyn‐
			    chronous I/O.
			    See <http://www.xmailserver.org/guasi-lib.html>.

		     rdma   The	 RDMA  I/O  engine  supports  both RDMA memory
			    semantics	(RDMA_WRITE/RDMA_READ)	 and   channel
			    semantics (Send/Recv) for the InfiniBand, RoCE and
			    iWARP protocols.

		     external
			    Loads an external I/O engine object file.	Append
			    the engine filename as `:enginepath'.

		     falloc IO engine that does regular Linux native
			    fallocate calls to simulate data transfer as a
			    fio ioengine:
			      DDIR_READ  does fallocate(,mode = FALLOC_FL_KEEP_SIZE,)
			      DDIR_WRITE does fallocate(,mode = 0)
			      DDIR_TRIM  does fallocate(,mode = FALLOC_FL_KEEP_SIZE|FALLOC_FL_PUNCH_HOLE)

		     e4defrag
			    IO	engine	that  does  regular  EXT4_IOC_MOVE_EXT
			    ioctls to simulate defragment activity request  to
			    DDIR_WRITE event

		     rbd    IO	engine	supporting direct access to Ceph Rados
			    Block Devices (RBD) via librbd without the need to
			    use	 the  kernel rbd driver. This ioengine defines
			    engine specific options.

		     gfapi  Using Glusterfs libgfapi sync interface to	direct
			    access  to	Glusterfs volumes without having to go
			    through FUSE. This ioengine	 defines  engine  spe‐
			    cific options.

		     gfapi_async
			    Using Glusterfs libgfapi async interface to direct
			    access to Glusterfs volumes without having	to  go
			    through  FUSE.  This  ioengine defines engine spe‐
			    cific options.

		     libhdfs
			    Read and write through Hadoop (HDFS). The
			    filename option is used to specify the host,port
			    of the HDFS name-node to connect to. This engine
			    interprets offsets a little differently. In HDFS,
			    files once created cannot be modified, so random
			    writes are not possible. To imitate this, the
			    libhdfs engine expects a bunch of small files to
			    be created over HDFS, and will randomly pick a
			    file out of those based on the offset generated
			    by the fio backend (see the example job file on
			    how to create such files, using the rw=write
			    option). Please note, you might want to set the
			    necessary environment variables to work with
			    hdfs/libhdfs properly.

		     mtd    Read, write and  erase  an	MTD  character	device
			    (e.g., /dev/mtd0). Discards are treated as erases.
			    Depending on the underlying device type,  the  I/O
			    may	 have  to  go  in  a certain pattern, e.g., on
			    NAND, writing sequentially	to  erase  blocks  and
			    discarding	before overwriting. The writetrim mode
			    works well for this constraint.
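
	      As an illustrative sketch, a job using the Linux native
	      asynchronous engine together with the direct and iodepth
	      options described elsewhere in this page:

		     [async-job]
		     ioengine=libaio
		     direct=1
		     iodepth=32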

       iodepth=int
	      Number of I/O units to keep in flight  against  the  file.  Note
	      that  increasing	iodepth	 beyond	 1 will not affect synchronous
	      ioengines (except for small degrees when verify_async is in
	      use).  Even async engines may impose OS restrictions causing the
	      desired depth not to be achieved.	 This may happen on Linux when
	      using  libaio and not setting direct=1, since buffered IO is not
	      async on that OS. Keep an eye on the IO  depth  distribution  in
	      the fio output to verify that the achieved depth is as expected.
	      Default: 1.

       iodepth_batch=int, iodepth_batch_submit=int
	      This defines how many  pieces  of	 IO  to	 submit	 at  once.  It
	      defaults	to  1 which means that we submit each IO as soon as it
	      is available, but can be raised to submit bigger batches	of  IO
	      at the time. If it is set to 0 the iodepth value will be used.

       iodepth_batch_complete_min=int, iodepth_batch_complete=int
	      This defines how many pieces of IO to retrieve at once. It
	      defaults to 1, which means that we'll ask for a minimum of 1 IO
	      in the retrieval process from the kernel. The IO retrieval will
	      go on until we hit the limit set by iodepth_low. If this
	      variable is set to 0, then fio will always check for completed
	      events before queuing more IO. This helps reduce IO latency, at
	      the cost of more retrieval system calls.

       iodepth_batch_complete_max=int
	      This defines the maximum pieces of IO to retrieve at once. This
	      variable should be used along with the
	      iodepth_batch_complete_min=int variable, specifying the range
	      of min and max amount of IO which should be retrieved. By
	      default it is equal to the iodepth_batch_complete_min value.

	      Example #1:
		     iodepth_batch_complete_min=1

		     iodepth_batch_complete_max=<iodepth>

	      which means that we will retrieve at least 1 IO and up to the
	      whole submitted queue depth. If no IO has been completed yet,
	      we will wait.

	      Example #2:
		     iodepth_batch_complete_min=0

		     iodepth_batch_complete_max=<iodepth>

	      which means that we can retrieve up to the whole submitted
	      queue depth, but if no IO has been completed yet, we will NOT
	      wait and will immediately exit the system call. In this example
	      we simply do polling.

       iodepth_low=int
	      Low watermark indicating when to start filling the queue	again.
	      Default: iodepth.

       io_submit_mode=str
	      This  option  controls  how fio submits the IO to the IO engine.
	      The default is inline, which means that the fio job threads sub‐
	      mit  and	reap  IO directly.  If set to offload, the job threads
	      will offload IO submission to a dedicated pool  of  IO  threads.
	      This  requires  some  coordination  and  thus has a bit of extra
	      overhead, especially for lower  queue  depth  IO	where  it  can
	      increase	latencies.  The benefit is that fio can manage submis‐
	      sion rates independently of the device  completion  rates.  This
	      avoids skewed latency reporting if IO gets back up on the device
	      side (the coordinated omission problem).
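
              As an illustrative sketch (the file name, depth, and rate cap
              are placeholder values, not recommendations), a job using
              offloaded submission might look like:

```ini
; hypothetical job: submission offloaded to dedicated IO threads so
; that the submission rate is decoupled from device completion rates
[offload-example]
filename=/tmp/fio.test    ; placeholder file
ioengine=libaio
rw=randread
iodepth=32
size=1g
io_submit_mode=offload
rate_iops=10000           ; illustrative rate cap
```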

       direct=bool
	      If true, use  non-buffered  I/O  (usually	 O_DIRECT).   Default:
	      false.

       atomic=bool
	      If value is true, attempt to use atomic direct IO. Atomic writes
	      are guaranteed to be stable once acknowledged by	the  operating
	      system. Only Linux supports O_ATOMIC right now.

       buffered=bool
	      If  true,	 use buffered I/O.  This is the opposite of the direct
	      parameter.  Default: true.

       offset=int
	      Offset in the file to start I/O. Data before the offset will not
	      be touched.

       offset_increment=int
	      If  this	is provided, then the real offset becomes the offset +
	      offset_increment * thread_number, where the thread number	 is  a
	      counter  that  starts  at	 0 and is incremented for each sub-job
	      (i.e. when numjobs option is specified). This option  is	useful
	      if  there	 are  several  jobs which are intended to operate on a
	      file in parallel disjoint segments, with	even  spacing  between
	      the starting points.
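
              For example, a hypothetical job that carves one file into four
              evenly spaced disjoint segments, one per sub-job (path and
              sizes are illustrative):

```ini
; each of the 4 clones writes at offset + offset_increment * thread_number,
; i.e. starting at 0, 1g, 2g and 3g respectively
[disjoint-segments]
filename=/tmp/fio.test
rw=write
bs=1m
size=1g
offset=0
offset_increment=1g
numjobs=4
```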

       number_ios=int
	      Fio will normally perform IOs until it has exhausted the size of
              the region set by size, or if it exhausts the allocated time (or
	      hits  an error condition). With this setting, the range/size can
	      be set independently of the number of IOs to perform.  When  fio
	      reaches  this  number,  it will exit normally and report status.
	      Note that this does not extend the amount of  IO	that  will  be
	      done,  it	 will  only  stop  fio if this condition is met before
	      other end-of-job criteria.

       fsync=int
	      How many I/Os to perform before issuing  an  fsync(2)  of	 dirty
	      data.  If 0, don't sync.	Default: 0.

       fdatasync=int
	      Like  fsync, but uses fdatasync(2) instead to only sync the data
	      parts of the file. Default: 0.

       write_barrier=int
	      Make every Nth write a barrier write.

       sync_file_range=str:int
	      Use sync_file_range(2) for every val number of write operations.
	      Fio will track range of writes that have happened since the last
	      sync_file_range(2) call.	str can currently be one or more of:

	      wait_before
		     SYNC_FILE_RANGE_WAIT_BEFORE

	      write  SYNC_FILE_RANGE_WRITE

              wait_after
                     SYNC_FILE_RANGE_WAIT_AFTER

              So if you do sync_file_range=wait_before,write:8, fio would use
              SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE for every 8
              writes. Also see the sync_file_range(2) man page. This option
              is Linux specific.

       overwrite=bool
	      If  writing,  setup  the file first and do overwrites.  Default:
	      false.

       end_fsync=bool
	      Sync file contents when a write stage has	 completed.   Default:
	      false.

       fsync_on_close=bool
	      If  true,	 sync  file  contents  on  close.   This  differs from
	      end_fsync in that it will happen on every close, not just at the
	      end of the job.  Default: false.

       rwmixread=int
	      Percentage  of  a	 mixed workload that should be reads. Default:
	      50.

       rwmixwrite=int
	      Percentage of a  mixed  workload	that  should  be  writes.   If
	      rwmixread	 and  rwmixwrite are given and do not sum to 100%, the
	      latter of the two overrides the first. This may interfere with a
	      given  rate setting, if fio is asked to limit reads or writes to
	      a certain rate. If that is the case, then the  distribution  may
	      be skewed. Default: 50.
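
              A minimal mixed-workload sketch using these options (block and
              file sizes are illustrative):

```ini
; 70% random reads, 30% random writes
[mixed-rw]
rw=randrw
rwmixread=70
bs=4k
size=1g
```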

       random_distribution=str:float
	      By  default,  fio will use a completely uniform random distribu‐
	      tion when asked to perform random IO. Sometimes it is useful  to
              skew the distribution in specific ways, ensuring that some
              parts of the data are hotter than others. Fio includes the
              following
	      distribution models:

	      random Uniform random distribution

	      zipf   Zipf distribution

	      pareto Pareto distribution

              When using a zipf or pareto distribution, an input value is
              also needed to define the access pattern. For zipf, this is the
              zipf theta. For pareto, it's the pareto power. Fio includes a
              test program, genzipf, that can be used to visualize what the
              given input values will yield in terms of hit rates. If you
              wanted to use zipf
	      with a theta of 1.2, you would use  random_distribution=zipf:1.2
	      as  the option. If a non-uniform model is used, fio will disable
	      use of the random map.
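
              Restating the zipf example from above as a complete job section
              (block and file sizes are illustrative):

```ini
; skewed random reads: zipf distribution with theta 1.2
[zipf-read]
rw=randread
bs=4k
size=1g
random_distribution=zipf:1.2
```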

       percentage_random=int
	      For a random workload, set how big a percentage should  be  ran‐
	      dom.  This defaults to 100%, in which case the workload is fully
              random. It can be set anywhere from 0 to 100. Setting it
	      to 0 would make the workload fully sequential. It is possible to
	      set different values for reads, writes, and trim. To do so, sim‐
	      ply use a comma separated list. See blocksize.

       norandommap
	      Normally	fio will cover every block of the file when doing ran‐
	      dom I/O. If this parameter is given, a new offset will be chosen
	      without looking at past I/O history.  This parameter is mutually
	      exclusive with verify.

       softrandommap=bool
              See norandommap. If fio runs with the random block map enabled
              and it fails to allocate the map, setting this option will make
              fio continue without a random block map. As coverage will not
              be as complete as with random maps, this option is disabled by
              default.

       random_generator=str
	      Fio supports the following engines for generating IO offsets for
	      random IO:

	      tausworthe
		     Strong 2^88 cycle random number generator

	      lfsr   Linear feedback shift register generator

	      tausworthe64
		     Strong 64-bit 2^258 cycle random number generator

              Tausworthe is a strong random number generator, but it requires
              tracking on the side if we want to ensure that blocks are only
              read or written once. LFSR guarantees that we never generate the
              same offset twice, and it's also less computationally expensive.
              It's not a true random generator, but for IO purposes it's
              typically good enough. LFSR only works with single block sizes,
              not with workloads that use multiple block sizes. If used with
              such a workload, fio may read or write some blocks multiple
              times.

       nice=int
	      Run job with given nice value.  See nice(2).

       prio=int
	      Set I/O priority value of this job between  0  (highest)	and  7
	      (lowest).	 See ionice(1).

       prioclass=int
	      Set I/O priority class.  See ionice(1).

       thinktime=int
	      Stall job for given number of microseconds between issuing I/Os.

       thinktime_spin=int
	      Pretend  to  spend  CPU  time  for given number of microseconds,
	      sleeping the rest of the	time  specified	 by  thinktime.	  Only
	      valid if thinktime is set.

       thinktime_blocks=int
	      Only  valid  if  thinktime  is  set - control how many blocks to
	      issue,  before  waiting  thinktime  microseconds.	 If  not  set,
	      defaults	to  1  which will make fio wait thinktime microseconds
	      after every block. This effectively makes any queue  depth  set‐
	      ting redundant, since no more than 1 IO will be queued before we
	      have to complete it and do our thinktime. In other  words,  this
	      setting  effectively  caps  the  queue  depth  if	 the latter is
	      larger.  Default: 1.

       rate=int
	      Cap bandwidth used by this job. The number is in bytes/sec,  the
	      normal postfix rules apply. You can use rate=500k to limit reads
	      and writes to 500k each, or you can specify read and writes sep‐
	      arately.	Using  rate=1m,500k  would  limit reads to 1MB/sec and
	      writes to 500KB/sec. Capping only reads or writes	 can  be  done
	      with rate=,500k or rate=500k,. The former will only limit writes
	      (to 500KB/sec), the latter will only limit reads.

       rate_min=int
	      Tell fio to do whatever it can to maintain at  least  the	 given
	      bandwidth.   Failing to meet this requirement will cause the job
	      to exit. The same format as rate is used for read vs write sepa‐
	      ration.

       rate_iops=int
	      Cap  the bandwidth to this number of IOPS. Basically the same as
	      rate, just specified independently of bandwidth. The same format
	      as  rate is used for read vs write separation. If blocksize is a
	      range, the smallest block size is used as the metric.

       rate_iops_min=int
	      If this rate of I/O is not met, the job will exit. The same for‐
	      mat as rate is used for read vs write separation.

       rate_process=str
              This option controls how fio manages rated IO submissions. The
              default is linear, which submits IO in a linear fashion with
              fixed delays between IOs that get adjusted based on IO
              completion rates. If this is set to poisson, fio will submit IO
              based on a more real world random request flow, known as the
              Poisson process (https://en.wikipedia.org/wiki/Poisson_process).
              The lambda will be 10^6 / IOPS for the given workload.
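
              A sketch of a poisson-rated job (the IOPS value and sizes are
              illustrative):

```ini
; submit IO with poisson-distributed arrivals averaging 2000 IOPS,
; giving a lambda of 10^6 / 2000 as described above
[poisson-rated]
ioengine=libaio
rw=randread
iodepth=16
size=1g
rate_iops=2000
rate_process=poisson
```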

       rate_cycle=int
	      Average bandwidth for rate and rate_min over this number of mil‐
	      liseconds.  Default: 1000ms.

       latency_target=int
	      If set, fio will attempt to find the max performance point  that
	      the given workload will run at while maintaining a latency below
              this target. The value is given in microseconds. See
	      latency_window and latency_percentile.

       latency_window=int
	      Used  with  latency_target to specify the sample window that the
	      job is run at varying queue depths to test the performance.  The
	      value is given in microseconds.

       latency_percentile=float
	      The  percentage of IOs that must fall within the criteria speci‐
	      fied by latency_target and  latency_window.  If  not  set,  this
              defaults to 100.0, meaning that all IOs must be equal to or
              below the value set by latency_target.
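
              A sketch combining the three latency options (all values are
              illustrative):

```ini
; search for the highest performance point where 99.9% of IOs
; complete within 10ms (10000us), sampled over 1s (1000000us) windows
[latency-qos]
ioengine=libaio
rw=randread
iodepth=64
size=1g
latency_target=10000
latency_window=1000000
latency_percentile=99.9
```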

       max_latency=int
	      If set, fio will	exit  the  job	if  it	exceeds	 this  maximum
	      latency. It will exit with an ETIME error.

       cpumask=int
	      Set  CPU affinity for this job. int is a bitmask of allowed CPUs
	      the job may run on.  See sched_setaffinity(2).

       cpus_allowed=str
	      Same as cpumask, but allows a comma-delimited list of  CPU  num‐
	      bers.

       cpus_allowed_policy=str
	      Set  the	policy	of  how	 fio distributes the CPUs specified by
	      cpus_allowed or cpumask. Two policies are supported:

		     shared All jobs will share the CPU set specified.

		     split  Each job will get a unique CPU from the CPU set.

	      shared is the default behaviour, if the option isn't  specified.
	      If  split is specified, then fio will assign one cpu per job. If
	      not enough CPUs are given for the jobs  listed,  then  fio  will
	      roundrobin the CPUs in the set.
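
              For example, pinning four clones to four distinct CPUs (the CPU
              numbers are illustrative):

```ini
; with policy 'split', each of the 4 jobs gets one CPU from the set
[pinned]
cpus_allowed=0,1,2,3
cpus_allowed_policy=split
numjobs=4
rw=randread
size=512m
```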

       numa_cpu_nodes=str
              Set this job to run on the specified NUMA nodes' CPUs. The
              argument allows a comma delimited list of cpu numbers, A-B
              ranges, or 'all'.

       numa_mem_policy=str
	      Set  this job's memory policy and corresponding NUMA nodes. For‐
	      mat of the arguments:

	      <mode>[:<nodelist>]

              mode   is one of the following memory policies:

	      default, prefer, bind, interleave, local

              For the default and local memory policies, no nodelist needs
              to be specified. For prefer, only one node is allowed.
	      For bind and interleave, nodelist allows comma delimited list of
	      numbers, A-B ranges, or 'all'.
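
              A sketch of a NUMA-aware job (node numbers are illustrative,
              and this assumes fio was built with libnuma support):

```ini
; run on node 0's CPUs, bind IO buffer memory to nodes 0 and 1
[numa-bound]
numa_cpu_nodes=0
numa_mem_policy=bind:0-1
rw=randread
size=1g
```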

       startdelay=irange
	      Delay start of job for the specified number of seconds. Supports
	      all time suffixes to allow specification of hours, minutes, sec‐
	      onds and milliseconds - seconds are the default  if  a  unit  is
	      omitted.	 Can  be  given as a range which causes each thread to
	      choose randomly out of the range.

       runtime=int
	      Terminate processing after the specified number of seconds.

       time_based
	      If given, run for the specified runtime  duration	 even  if  the
	      files  are completely read or written. The same workload will be
	      repeated as many times as runtime allows.

       ramp_time=int
	      If set, fio will run the specified workload for this  amount  of
	      time  before logging any performance numbers. Useful for letting
	      performance settle before logging results, thus  minimizing  the
	      runtime  required for stable results. Note that the ramp_time is
	      considered lead in time for a job, thus  it  will	 increase  the
	      total runtime if a special timeout or runtime is specified.
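
              A typical steady-state measurement sketch using these timing
              options (durations and sizes are illustrative):

```ini
; loop over the file for a fixed 5 minutes, discarding the first
; 30 seconds of results while performance settles
[steady-state]
rw=randread
size=1g
time_based
runtime=300
ramp_time=30
```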

       invalidate=bool
	      Invalidate  buffer-cache	for  the  file	prior to starting I/O.
	      Default: true.

       sync=bool
	      Use synchronous I/O for buffered writes.	For  the  majority  of
	      I/O engines, this means using O_SYNC.  Default: false.

       iomem=str, mem=str
	      Allocation method for I/O unit buffer.  Allowed values are:

		     malloc Allocate memory with malloc(3).

		     shm    Use	  shared   memory  buffers  allocated  through
			    shmget(2).

		     shmhuge
			    Same as shm, but use huge pages as backing.

		     mmap   Use mmap(2) for allocation.	 Uses anonymous memory
			    unless a filename is given after the option in the
			    format `:file'.

		     mmaphuge
			    Same as mmap, but use huge files as backing.

		     mmapshared
                            Same as mmap, but use a MAP_SHARED mapping.

	      The amount of memory allocated is the maximum allowed  blocksize
	      for  the	job multiplied by iodepth.  For shmhuge or mmaphuge to
	      work, the system must have free huge pages allocated.   mmaphuge
	      also needs to have hugetlbfs mounted, and file must point there.
              At least on Linux, huge pages must be manually allocated. See
              /proc/sys/vm/nr_hugepages and the documentation for that.
              Normally you just need to echo an appropriate number, eg
              echoing 8 will ensure that the OS has 8 huge pages ready for
              use.
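
              A sketch of a huge-page-backed job (this assumes the system
              already has free huge pages allocated as described above):

```ini
; back the IO buffers with huge-page shared memory
[hugepage-buffers]
iomem=shmhuge
bs=1m
rw=read
size=1g
```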

       iomem_align=int, mem_align=int
	      This  indicates  the  memory alignment of the IO memory buffers.
	      Note that the given alignment is applied to the  first  IO  unit
	      buffer,  if using iodepth the alignment of the following buffers
              are given by the bs used. In other words, if using a bs that is
              a multiple of the page size of the system, all buffers will be
              aligned to this value. If using a bs that is not page aligned,
	      the  alignment of subsequent IO memory buffers is the sum of the
	      iomem_align and bs used.

       hugepage-size=int
	      Defines the size of a huge page.	Must be at least equal to  the
	      system setting.  Should be a multiple of 1MB. Default: 4MB.

       exitall
	      Terminate	 all  jobs  when one finishes.	Default: wait for each
	      job to finish.

       exitall_on_error=bool
	      Terminate all jobs if one job finishes in error.	Default:  wait
	      for each job to finish.

       bwavgtime=int
	      Average  bandwidth calculations over the given time in millisec‐
	      onds.  Default: 500ms.

       iopsavgtime=int
	      Average IOPS calculations over the given time  in	 milliseconds.
	      Default: 500ms.

       create_serialize=bool
	      If true, serialize file creation for the jobs.  Default: true.

       create_fsync=bool
	      fsync(2) data file after creation.  Default: true.

       create_on_open=bool
	      If  true, the files are not created until they are opened for IO
	      by the job.

       create_only=bool
	      If true, fio will only run the setup phase of the job. If	 files
	      need  to be laid out or updated on disk, only that will be done.
	      The actual job contents are not executed.

       allow_file_create=bool
	      If true, fio is permitted to create files as part of  its	 work‐
	      load.  This  is  the  default behavior. If this option is false,
	      then fio will error out if the  files  it	 needs	to  use	 don't
	      already exist. Default: true.

       allow_mounted_write=bool
	      If  this isn't set, fio will abort jobs that are destructive (eg
	      that write) to what appears to be a mounted device or partition.
	      This should help catch creating inadvertently destructive tests,
	      not realizing that the test will destroy	data  on  the  mounted
	      file system. Default: false.

       pre_read=bool
	      If  this	is  given,  files  will be pre-read into memory before
	      starting the given  IO  operation.  This	will  also  clear  the
	      invalidate flag, since it is pointless to pre-read and then drop
	      the cache. This will only work for IO engines that are seekable,
	      since  they allow you to read the same data multiple times. Thus
	      it will not work on eg network or splice IO.

       unlink=bool
	      Unlink job files when done.  Default: false.

       loops=int
	      Specifies the number of iterations (runs of the  same  workload)
	      of this job.  Default: 1.

       verify_only=bool
	      Do  not  perform	the specified workload, only verify data still
	      matches previous invocation of this workload. This option allows
	      one  to  check data multiple times at a later date without over‐
	      writing it. This option makes  sense  only  for  workloads  that
	      write  data,  and does not support workloads with the time_based
	      option set.

       do_verify=bool
	      Run the verify phase after a write phase.	 Only valid if	verify
	      is set.  Default: true.

       verify=str
	      Method  of  verifying  file contents after each iteration of the
	      job. Each verification method also implies verification of  spe‐
	      cial  header,  which  is written to the beginning of each block.
	      This header also includes meta information, like offset  of  the
	      block,  block  number,  timestamp	 when  block was written, etc.
	      verify=str can be combined with verify_pattern=str option.   The
	      allowed values are:

		     md5  crc16	 crc32	crc32c	crc32c-intel crc64 crc7 sha256
		     sha512 sha1 xxhash
			    Store appropriate checksum in the header  of  each
			    block. crc32c-intel is hardware accelerated SSE4.2
			    driven, falls back to regular crc32c if  not  sup‐
			    ported by the system.

		     meta   This option is deprecated, since now meta informa‐
			    tion is included in	 generic  verification	header
			    and	 meta  verification  happens  by default.  For
			    detailed information see the  description  of  the
			    verify=str setting. This option is kept because of
			    compatibility's sake with old  configurations.  Do
			    not use it.

		     pattern
			    Verify  a  strict pattern. Normally fio includes a
			    header with some basic information	and  checksum‐
			    ming, but if this option is set, only the specific
			    pattern set with verify_pattern is verified.

		     null   Pretend to verify.	Used for testing internals.

	      This option can be used for repeated burn-in tests of  a	system
	      to  make sure that the written data is also correctly read back.
	      If the data direction given is a read or random read,  fio  will
	      assume  that  it should verify a previously written file. If the
	      data direction includes any form of write, the verify will be of
	      the newly written data.
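
              A minimal burn-in sketch (block size and file size are
              illustrative):

```ini
; write the file with crc32c-checksummed blocks, then read it all
; back and verify the checksums
[burn-in]
rw=write
bs=4k
size=1g
verify=crc32c
```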

       verifysort=bool
	      If  true, written verify blocks are sorted if fio deems it to be
	      faster to read them back in a sorted manner.  Default: true.

       verifysort_nr=int
	      Pre-load and sort verify blocks for a read workload.

       verify_offset=int
	      Swap the verification header with data  somewhere	 else  in  the
	      block before writing.  It is swapped back before verifying.

       verify_interval=int
	      Write  the  verification	header for this number of bytes, which
	      should divide blocksize.	Default: blocksize.

       verify_pattern=str
	      If set, fio will fill the io  buffers  with  this	 pattern.  Fio
	      defaults	to  filling  with  totally random bytes, but sometimes
	      it's interesting to fill with a known pattern for	 io  verifica‐
	      tion  purposes.  Depending on the width of the pattern, fio will
              fill 1/2/3/4 bytes of the buffer at a time (it can be either a
              decimal or a hex number). If the verify_pattern is larger than a
              32-bit quantity, it has to be a hex number that starts with
              either "0x" or "0X". Use with verify=str. Also, verify_pattern
              supports the %o format, which means that the offset of each
              block will be written and then verified back, e.g.:
		     verify_pattern=%o
	      Or use combination of everything:

		     verify_pattern=0xff%o"abcd"-21

       verify_fatal=bool
	      If  true,	 exit the job on the first observed verification fail‐
	      ure.  Default: false.

       verify_dump=bool
	      If set, dump the contents of both the original  data  block  and
	      the  data	 block	we  read  off disk to files. This allows later
	      analysis to inspect just what kind of data corruption  occurred.
	      Off by default.

       verify_async=int
	      Fio  will	 normally verify IO inline from the submitting thread.
	      This option takes an integer describing how many	async  offload
	      threads  to  create  for IO verification instead, causing fio to
	      offload the duty of verifying IO contents to one or  more	 sepa‐
	      rate  threads.   If  using  this	offload	 option,  even sync IO
	      engines can benefit from using an iodepth setting higher than 1,
	      as  it  allows them to have IO in flight while verifies are run‐
	      ning.

       verify_async_cpus=str
	      Tell fio to set the given CPU affinity on the async IO verifica‐
	      tion threads.  See cpus_allowed for the format used.

       verify_backlog=int
	      Fio will normally verify the written contents of a job that uti‐
	      lizes verify once that job has completed. In other words, every‐
	      thing  is written then everything is read back and verified. You
	      may want to verify continually instead for a variety of reasons.
	      Fio  stores the meta data associated with an IO block in memory,
	      so for large verify workloads, quite a bit of  memory  would  be
	      used  up	holding this meta data. If this option is enabled, fio
	      will write only N blocks before verifying these blocks.

       verify_backlog_batch=int
              Control how many blocks fio will verify if verify_backlog is
              set. If not set, it will default to the value of verify_backlog
              (meaning the entire queue is read back and verified). If
              verify_backlog_batch is less than verify_backlog then not all
              blocks will be verified; if verify_backlog_batch is larger than
              verify_backlog, some blocks will be verified more than once.
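
              For example, a sketch that interleaves writing and verification
              (the counts and sizes are illustrative):

```ini
; after every 1024 written blocks, read 256 of them back and verify;
; since batch < backlog, only a subset of blocks is verified
[interleaved-verify]
rw=write
bs=4k
size=4g
verify=crc32c
verify_backlog=1024
verify_backlog_batch=256
```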

       trim_percentage=int
	      Number of verify blocks to discard/trim.

       trim_verify_zero=bool
	      Verify that trim/discarded blocks are returned as zeroes.

       trim_backlog=int
	      Trim after this number of blocks are written.

       trim_backlog_batch=int
	      Trim this number of IO blocks.

       experimental_verify=bool
	      Enable experimental verification.

       verify_state_save=bool
	      When  a  job  exits during the write phase of a verify workload,
	      save its current state. This allows fio to replay up until  that
	      point, if the verify state is loaded for the verify read phase.

       verify_state_load=bool
	      If a verify termination trigger was used, fio stores the current
	      write state of each thread. This can  be	used  at  verification
	      time  so	that  fio knows how far it should verify. Without this
	      information, fio will run a full verification pass, according to
	      the settings in the job file used.

       stonewall, wait_for_previous
	      Wait  for preceding jobs in the job file to exit before starting
	      this one.	 stonewall implies new_group.

       new_group
	      Start a new reporting group.  If not given, all jobs in  a  file
	      will  be part of the same reporting group, unless separated by a
	      stonewall.

       numjobs=int
	      Number of clones (processes/threads performing  the  same	 work‐
	      load) of this job.  Default: 1.

       group_reporting
	      If  set,	display	 per-group  reports  instead  of  per-job when
	      numjobs is specified.

       thread Use threads created with pthread_create(3) instead of  processes
	      created with fork(2).

       zonesize=int
	      Divide  file  into  zones	 of  the specified size in bytes.  See
	      zoneskip.

       zonerange=int
	      Give size of an IO zone.	See zoneskip.

       zoneskip=int
	      Skip the specified number of bytes when zonesize bytes  of  data
	      have been read.
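
              For example, a sketch that samples 64m out of every 256m of the
              file (sizes are illustrative):

```ini
; read each 64m zone, then skip the next 192m before continuing
[zoned-sample]
rw=read
size=1g
zonesize=64m
zoneskip=192m
```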

       write_iolog=str
	      Write  the issued I/O patterns to the specified file.  Specify a
	      separate file for each job, otherwise the iologs will be	inter‐
	      spersed and the file may be corrupt.

       read_iolog=str
	      Replay  the  I/O patterns contained in the specified file gener‐
	      ated by write_iolog, or may be a blktrace binary file.

       replay_no_stall=int
	      While replaying I/O patterns using read_iolog the default behav‐
	      ior   attempts  to  respect  timing  information	between	 I/Os.
	      Enabling replay_no_stall causes I/Os to be replayed as  fast  as
	      possible while still respecting ordering.

       replay_redirect=str
	      While replaying I/O patterns using read_iolog the default behav‐
	      ior is to replay the IOPS onto the major/minor device that  each
	      IOP  was recorded from.  Setting replay_redirect causes all IOPS
	      to be replayed onto the single specified	device	regardless  of
	      the device it was recorded from.

       replay_align=int
	      Force  alignment	of  IO	offsets and lengths in a trace to this
	      power of 2 value.

       replay_scale=int
	      Scale sector offsets down by this factor when replaying traces.

       per_job_logs=bool
	      If set, this generates bw/clat/iops log with  per	 file  private
	      filenames.  If not set, jobs with identical names will share the
	      log filename. Default: true.

       write_bw_log=str
	      If given, write a bandwidth log of the jobs in  this  job	 file.
	      Can  be used to store data of the bandwidth of the jobs in their
	      lifetime. The included fio_generate_plots script uses gnuplot to
	      turn  these  text	 files into nice graphs. See write_lat_log for
	      behaviour of given filename. For this  option,  the  postfix  is
	      _bw.x.log, where x is the index of the job (1..N, where N is the
	      number of jobs). If per_job_logs is  false,  then	 the  filename
	      will not include the job index.

       write_lat_log=str
	      Same  as	write_bw_log, but writes I/O completion latencies.  If
	      no filename is given with this option, the default  filename  of
	      "jobname_type.x.log"  is	used,  where x is the index of the job
	      (1..N, where N is the number of jobs). Even if the  filename  is
	      given, fio will still append the type of log. If per_job_logs is
	      false, then the filename will not include the job index.

       write_iops_log=str
	      Same as write_bw_log, but writes IOPS. If no filename  is	 given
	      with  this  option, the default filename of "jobname_type.x.log"
	      is used, where x is the index of the job (1..N, where N  is  the
	      number  of  jobs). Even if the filename is given, fio will still
	      append the type of log. If per_job_logs is false, then the file‐
	      name will not include the job index.

       log_avg_msec=int
	      By  default,  fio	 will log an entry in the iops, latency, or bw
	      log for every IO that completes. When writing to the  disk  log,
	      that  can quickly grow to a very large size. Setting this option
              makes fio average each log entry over the specified period
	      of time, reducing the resolution of the log.  Defaults to 0.
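
              A sketch tying the logging options together (log filenames and
              the averaging window are illustrative):

```ini
; write bandwidth and completion latency logs, averaged over 100ms
; windows to keep the log files small
[logged-run]
rw=randwrite
bs=4k
size=1g
write_bw_log=logged-run
write_lat_log=logged-run
log_avg_msec=100
```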

       log_offset=bool
	      If  this	is set, the iolog options will include the byte offset
	      for the IO entry as well as the other data values.

       log_compression=int
	      If this is set, fio will compress the IO logs  as	 it  goes,  to
	      keep  the	 memory footprint lower. When a log reaches the speci‐
	      fied size, that chunk is removed and  compressed	in  the	 back‐
	      ground.  Given that IO logs are fairly highly compressible, this
	      yields a nice memory savings for longer runs.  The  downside  is
	      that the compression will consume some background CPU cycles, so
	      it may impact the run. This, however, is also true if  the  log‐
	      ging  ends  up consuming most of the system memory. So pick your
	      poison. The IO logs are saved normally at the end of a  run,  by
	      decompressing  the  chunks and storing them in the specified log
	      file. This feature depends on the availability of zlib.

       log_compression_cpus=str
	      Define the set of CPUs that are allowed  to  handle  online  log
	      compression  for	the IO jobs. This can provide better isolation
	      between performance sensitive jobs, and  background  compression
	      work.

       log_store_compressed=bool
	      If  set,	fio  will  store the log files in a compressed format.
	      They can be decompressed with fio, using the --inflate-log  com‐
	      mand  line  parameter.  The files will be stored with a .fz suf‐
	      fix.

       block_error_percentiles=bool
	      If set, record errors in trim block-sized units from writes  and
	      trims and output a histogram of how many trims it took to get to
	      errors, and what kind of error was encountered.

       disable_lat=bool
	      Disable measurements of total latency numbers. Useful  only  for
	      cutting  back  the  number  of calls to gettimeofday(2), as that
	      does impact performance at really high IOPS rates.  Note that to
	      really  get  rid	of  a large amount of these calls, this option
	      must be used with disable_slat and disable_bw as well.

       disable_clat=bool
	      Disable measurements of completion  latency  numbers.  See  dis‐
	      able_lat.

       disable_slat=bool
	      Disable  measurements  of	 submission  latency numbers. See dis‐
	      able_lat.

       disable_bw_measurement=bool
	      Disable measurements of throughput/bandwidth numbers.  See  dis‐
	      able_lat.

       lockmem=int
	      Pin  the	specified amount of memory with mlock(2).  Can be used
	      to simulate a smaller amount of memory. The amount specified  is
	      per worker.

       exec_prerun=str
	      Before  running the job, execute the specified command with sys‐
	      tem(3).
              Output is redirected to a file called jobname.prerun.txt.

       exec_postrun=str
	      Same as exec_prerun, but the command is executed after  the  job
	      completes.
	      Output is redirected to a file called jobname.postrun.txt.
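A sketch of a job using both hooks (the commands are examples; dropping the page cache this way is Linux-specific and requires root):

```ini
; drop the page cache before reading, sync afterwards; command output
; lands in cached-read.prerun.txt and cached-read.postrun.txt
[cached-read]
filename=/tmp/fio.data
rw=read
size=32m
exec_prerun=echo 3 > /proc/sys/vm/drop_caches
exec_postrun=sync
```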

       ioscheduler=str
	      Attempt  to  switch the device hosting the file to the specified
	      I/O scheduler.

       disk_util=bool
	      Generate disk utilization statistics if  the  platform  supports
	      it. Default: true.

       clocksource=str
	      Use  the	given clocksource as the base of timing. The supported
	      options are:

	      gettimeofday
		     gettimeofday(2)

	      clock_gettime
		     clock_gettime(2)

	      cpu    Internal CPU clock source

	      cpu is the preferred clocksource if it is reliable, as it is
	      very fast (and fio is heavy on time calls). Fio will
	      automatically use this clocksource if it's supported and
	      considered reliable on the system it is running on, unless
	      another clocksource is specifically set. For x86/x86-64 CPUs,
	      this means supporting TSC Invariant.
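To override the automatic choice, the clocksource can be pinned in the job file; a minimal sketch (filename is illustrative):

```ini
; force clock_gettime(2) instead of letting fio pick the cpu clock
[timed-job]
filename=/tmp/fio.data
rw=randwrite
size=16m
clocksource=clock_gettime
```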

       gtod_reduce=bool
	      Enable   all  of	the  gettimeofday(2)  reducing	options	 (dis‐
	      able_clat, disable_slat, disable_bw) plus	 reduce	 precision  of
	      the  timeout  somewhat to really shrink the gettimeofday(2) call
	      count. With this option enabled, we only do about	 0.4%  of  the
	      gtod() calls we would have done if all time keeping was enabled.

       gtod_cpu=int
	      Sometimes	 it's cheaper to dedicate a single thread of execution
	      to just getting  the  current  time.  Fio	 (and  databases,  for
	      instance) are very intensive on gettimeofday(2) calls. With this
	      option, you can set one CPU aside for doing nothing but  logging
	      current  time  to	 a  shared  memory  location.  Then  the other
	      threads/processes that run IO workloads need only copy that seg‐
	      ment,  instead  of  entering  the	 kernel with a gettimeofday(2)
	      call. The CPU set aside for  doing  these	 time  calls  will  be
	      excluded	from  other  uses. Fio will manually clear it from the
	      CPU mask of other jobs.
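A sketch of dedicating CPU 0 to time keeping (CPU numbers and filename are illustrative; as noted above, fio clears the gtod CPU from the other jobs' masks automatically):

```ini
[global]
gtod_cpu=0          ; CPU 0 does nothing but update the shared timestamp

[workers]
numjobs=4
cpus_allowed=1-4    ; keep the IO threads off the clock CPU
rw=randread
filename=/tmp/fio.data
size=32m
```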

       ignore_error=str
	      Sometimes you want to ignore some errors during a test; in
	      that case you can specify an error list for each error type:
	      ignore_error=READ_ERR_LIST,WRITE_ERR_LIST,VERIFY_ERR_LIST
	      Errors for a given error type are separated with ':'. An error
	      may be a symbol ('ENOSPC', 'ENOMEM') or an integer.
	      Example: ignore_error=EAGAIN,ENOSPC:122
	      This option will ignore EAGAIN from READ, and ENOSPC and 122
	      (EDQUOT) from WRITE.

       error_dump=bool
	      If set, dump every error even if it is non-fatal; true by
	      default. If disabled, only fatal errors will be dumped.

       profile=str
	      Select a specific builtin performance test.

       cgroup=str
	      Add job to this control group. If it doesn't exist, it  will  be
	      created.	 The  system  must  have  a mounted cgroup blkio mount
	      point for this to work. If your system doesn't have it  mounted,
	      you can do so with:

	      # mount -t cgroup -o blkio none /cgroup

       cgroup_weight=int
	      Set  the	weight of the cgroup to this value. See the documenta‐
	      tion that comes with the kernel, allowed values are in the range
	      of 100..1000.

       cgroup_nodelete=bool
	      Normally	fio  will  delete the cgroups it has created after the
	      job completion.  To override this behavior and to leave  cgroups
	      around after the job completion, set cgroup_nodelete=1. This can
	      be useful if one wants to inspect various cgroup files after job
	      completion. Default: false
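The cgroup options can be combined to weight two jobs against each other; a sketch (group names, weights, and filenames are illustrative, and the blkio mount point must exist as described above):

```ini
; two jobs with a 3:1 blkio weighting (allowed weight range: 100..1000)
[fast]
cgroup=fio-fast
cgroup_weight=600
rw=randread
filename=/tmp/fio.a
size=32m

[slow]
cgroup=fio-slow
cgroup_weight=200
cgroup_nodelete=1   ; leave the cgroup around for inspection afterwards
rw=randread
filename=/tmp/fio.b
size=32m
```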

       uid=int
	      Instead of running as the invoking user, set the user ID to this
	      value before the thread/process does any work.

       gid=int
	      Set group ID, see uid.

       unit_base=int
	      Base unit for reporting.	Allowed values are:

	      0	     Use auto-detection (default).

	      8	     Byte based.

	      1	     Bit based.

       flow_id=int
	      The ID of the flow. If not specified, it	defaults  to  being  a
	      global flow. See flow.

       flow=int
	      Weight  in token-based flow control. If this value is used, then
	      there is a flow counter which is used to regulate the proportion
	      of  activity between two or more jobs. fio attempts to keep this
	      flow counter near zero. The flow parameter stands for  how  much
	      should be added or subtracted to the flow counter on each itera‐
	      tion of the main I/O loop. That is, if one job  has  flow=8  and
	      another  job has flow=-1, then there will be a roughly 1:8 ratio
	      in how much one runs vs the other.

       flow_watermark=int
	      The maximum value that the absolute value of the flow counter is
	      allowed  to  reach before the job must wait for a lower value of
	      the counter.

       flow_sleep=int
	      The period of time, in microseconds, to wait after the flow
	      watermark has been exceeded before retrying operations.
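The flow=8 / flow=-1 example above can be sketched as a two-job file (filenames and the watermark/sleep values are illustrative):

```ini
; roughly 1:8 activity ratio between the two jobs sharing flow_id 1
[heavy]
flow_id=1
flow=8
rw=randread
filename=/tmp/fio.a
size=64m

[light]
flow_id=1
flow=-1
rw=randwrite
filename=/tmp/fio.b
size=64m
flow_watermark=1024   ; max absolute flow counter before waiting
flow_sleep=100        ; wait 100 usec before retrying
```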

       clat_percentiles=bool
	      Enable the reporting of percentiles of completion latencies.

       percentile_list=float_list
	      Overwrite	 the default list of percentiles for completion laten‐
	      cies and the block error histogram. Each number  is  a  floating
	      number  in the range (0,100], and the maximum length of the list
	      is 20. Use ':' to separate  the  numbers.	 For  example,	--per‐
	      centile_list=99.5:99.9  will  cause  fio to report the values of
	      completion latency below which 99.5% and 99.9% of	 the  observed
	      latencies fell, respectively.

   Ioengine Parameters List
       Some  parameters	 are  only  valid  when a specific ioengine is in use.
       These are used identically to normal parameters, with the  caveat  that
       when used on the command line, they must come after the ioengine.

       (cpu)cpuload=int
	      Attempt to use the specified percentage of CPU cycles.

       (cpu)cpuchunks=int
	      Split the load into cycles of the given time. In microseconds.

       (cpu)exit_on_io_done=bool
	      Detect when IO threads are done, then exit.

       (libaio)userspace_reap
	      Normally,	 with  the  libaio  engine  in	use,  fio will use the
	      io_getevents system call to reap newly  returned	events.	  With
	      this  flag  turned  on,  the AIO ring will be read directly from
	      user-space to reap events. The reaping mode is only enabled when
	      polling  for  a  minimum of 0 events (eg when iodepth_batch_com‐
	      plete=0).
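A sketch of a libaio job with user-space reaping enabled (the device path is a placeholder; note the iodepth_batch_complete=0 requirement stated above):

```ini
; reap AIO events from the ring in user space
[aio-reap]
ioengine=libaio
userspace_reap
direct=1
rw=randread
bs=4k
iodepth=64
iodepth_batch_complete=0   ; required for the reaping mode to engage
filename=/dev/sdX          ; hypothetical block device
```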

       (net,netsplice)hostname=str
	      The host name or IP address to use for TCP or UDP based IO.   If
	      the  job	is  a  TCP listener or UDP reader, the hostname is not
	      used and must be omitted unless it  is  a	 valid	UDP  multicast
	      address.

       (net,netsplice)port=int
	      The  TCP	or  UDP port to bind to or connect to. If this is used
	      with numjobs to spawn multiple instances of the same  job	 type,
	      then  this will be the starting port number since fio will use a
	      range of ports.

       (net,netsplice)interface=str
	      The IP address of the network interface used to send or  receive
	      UDP multicast packets.

       (net,netsplice)ttl=int
	      Time-to-live  value for outgoing UDP multicast packets. Default:
	      1

       (net,netsplice)nodelay=bool
	      Set TCP_NODELAY on TCP connections.

       (net,netsplice)protocol=str, proto=str
	      The network protocol to use. Accepted values are:

		     tcp    Transmission control protocol

		     tcpv6  Transmission control protocol V6

		     udp    User datagram protocol

		     udpv6  User datagram protocol V6

		     unix   UNIX domain socket

	      When the protocol is TCP or UDP, the port must also be given, as
	      well as the hostname if the job is a TCP listener or UDP reader.
	      For unix sockets, the normal filename option should be used  and
	      the port is invalid.

       (net,netsplice)listen
	      For  TCP	network	 connections,  tell fio to listen for incoming
	      connections rather than initiating an outgoing  connection.  The
	      hostname must be omitted if this option is used.

       (net,netsplice)pingpong=bool
	      Normally a network writer will just continue writing data, and a
	      network reader will just consume packets. If pingpong=1 is  set,
	      a	 writer	 will send its normal payload to the reader, then wait
	      for the reader to send the same payload back.  This  allows  fio
	      to  measure  network  latencies.	The  submission and completion
	      latencies then measure local time spent  sending	or  receiving,
	      and  the	completion  latency  measures how long it took for the
	      other end to receive and send back. For  UDP  multicast  traffic
	      pingpong=1  should only be set for a single reader when multiple
	      readers are listening to the same address.
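A sketch of a TCP latency ping-pong using the options above; in practice the reader and writer sections would live in separate job files, run on each end (hostname, port, and sizes are illustrative):

```ini
; run the reader (listener) first, then the writer on the other host
[reader]
ioengine=net
port=8888
listen
rw=read
bs=1k
size=10m
pingpong=1

[writer]
ioengine=net
hostname=host.example.com
port=8888
rw=write
bs=1k
size=10m
pingpong=1
```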

       (net)window_size=int
	      Set the desired socket buffer size for the connection.

       (net)mss=int
	      Set the TCP maximum segment size (TCP_MAXSEG).

       (e4defrag)donorname=str
	      File that will be used as a block donor (swap extents between
	      files).

       (e4defrag)inplace=int
	      Configure the donor file block allocation strategy:

	      0	     Default. Preallocate the donor's file on init.

	      1	     Allocate space immediately inside the defragment event,
		     and free it right after the event.

       (rbd)rbdname=str
	      Specifies the name of the RBD.

       (rbd)pool=str
	      Specifies the name of the Ceph pool containing the RBD.

       (rbd)clientname=str
	      Specifies	 the  username	(without the 'client.' prefix) used to
	      access the Ceph cluster.

       (mtd)skipbad=bool
	      Skip operations against known bad blocks.

OUTPUT
       While running, fio will display the status of the  created  jobs.   For
       example:

	      Threads:	1:  [_r]  [24.8%  done]	 [  13509/   8334  kb/s]  [eta
	      00h:01m:31s]

       The characters in the first set of brackets denote the current status
       of each thread.	The possible values are:

	      P	     Setup but not started.
	      C	     Thread created.
	      I	     Initialized, waiting.
	      R	     Running, doing sequential reads.
	      r	     Running, doing random reads.
	      W	     Running, doing sequential writes.
	      w	     Running, doing random writes.
	      M	     Running, doing mixed sequential reads/writes.
	      m	     Running, doing mixed random reads/writes.
	      F	     Running, currently waiting for fsync(2).
	      V	     Running, verifying written data.
	      E	     Exited, not reaped by main thread.
	      -	     Exited, thread reaped.

       The second set of brackets shows the estimated completion percentage of
       the current group.  The third set shows the read and  write  I/O	 rate,
       respectively. Finally, the estimated run time of the job is displayed.

       When fio completes (or is interrupted by Ctrl-C), it will show data for
       each thread, each group of threads, and each disk, in that order.

       Per-thread statistics first show the thread's client number, group-id,
       and error code.	The remaining figures are as follows:

	      io     Number of megabytes of I/O performed.

	      bw     Average data rate (bandwidth).

	      runt   Thread's run time.

	      slat   Submission latency minimum, maximum, average and standard
		     deviation. This is the time it took to submit the I/O.

	      clat   Completion latency minimum, maximum, average and standard
		     deviation.	  This is the time between submission and com‐
		     pletion.

	      bw     Bandwidth minimum, maximum, percentage of aggregate band‐
		     width received, average and standard deviation.

	      cpu    CPU usage statistics. Includes user and system time, num‐
		     ber of context switches this thread went through and num‐
		     ber of major and minor page faults.

	      IO depths
		     Distribution  of  I/O depths.  Each depth includes every‐
		     thing less than (or equal) to it, but  greater  than  the
		     previous depth.

	      IO issued
		     Number of read/write requests issued, and number of short
		     read/write requests.

	      IO latencies
		     Distribution of I/O completion  latencies.	  The  numbers
		     follow the same pattern as IO depths.

       The group statistics show:
	      io     Number of megabytes of I/O performed.
	      aggrb  Aggregate bandwidth of threads in the group.
	      minb   Minimum average bandwidth a thread saw.
	      maxb   Maximum average bandwidth a thread saw.
	      mint   Shortest runtime of threads in the group.
	      maxt   Longest runtime of threads in the group.

       Finally, disk statistics are printed with reads first:
	      ios    Number of I/Os performed by all groups.
	      merge  Number of merges in the I/O scheduler.
	      ticks  Number of ticks we kept the disk busy.
	      io_queue
		     Total time spent in the disk queue.
	      util   Disk utilization.

       It  is  also possible to get fio to dump the current output while it is
       running, without terminating the job. To do that,  send	fio  the  USR1
       signal.

TERSE OUTPUT
       If  the	--minimal / --append-terse options are given, the results will
       be  printed/appended  in	 a  semicolon-delimited	 format	 suitable  for
       scripted	 use.	A job description (if provided) follows on a new line.
       Note that the first number in the line is the version  number.  If  the
       output  has  to	be changed for some reason, this number will be incre‐
       mented by 1 to signify that change.  The fields are:

	      terse version, fio version, jobname, groupid, error

	      Read status:
		     Total I/O (KB), bandwidth (KB/s), IOPS, runtime (ms)

		     Submission latency:
			    min, max, mean, standard deviation
		     Completion latency:
			    min, max, mean, standard deviation
		     Completion latency percentiles (20 fields):
			    Xth percentile=usec
		     Total latency:
			    min, max, mean, standard deviation
		     Bandwidth:
			    min, max, aggregate	 percentage  of	 total,	 mean,
			    standard deviation

	      Write status:
		     Total I/O (KB), bandwidth (KB/s), IOPS, runtime (ms)

		     Submission latency:
			    min, max, mean, standard deviation
		     Completion latency:
			    min, max, mean, standard deviation
		     Completion latency percentiles (20 fields):
			    Xth percentile=usec
		     Total latency:
			    min, max, mean, standard deviation
		     Bandwidth:
			    min,  max,	aggregate  percentage  of total, mean,
			    standard deviation

	      CPU usage:
		     user, system, context switches, major page faults,	 minor
		     page faults

	      IO depth distribution:
		     <=1, 2, 4, 8, 16, 32, >=64

	      IO latency distribution:
		     Microseconds:
			    <=2, 4, 10, 20, 50, 100, 250, 500, 750, 1000
		     Milliseconds:
			    <=2,  4,  10,  20,	50,  100, 250, 500, 750, 1000,
			    2000, >=2000

	      Disk utilization (1 for each disk used):
		     name, read ios, write ios,	 read  merges,	write  merges,
		     read  ticks,  write  ticks, read in-queue time, write in-
		     queue time, disk utilization percentage

	      Error Info (dependent on continue_on_error, default off):
		     total # errors, first error code

	      text description (if provided in config - appears on newline)

CLIENT / SERVER
       Normally you would run fio as a stand-alone application on the  machine
       where the IO workload should be generated. However, it is also possible
       to run the frontend and backend of fio separately. This makes it possi‐
       ble  to	have a fio server running on the machine(s) where the IO work‐
       load should be running, while controlling it from another machine.

       To start the server, you would do:

       fio --server=args

       on that machine, where args defines what fio listens to. The  arguments
       are  of	the form 'type:hostname or IP:port'. 'type' is either 'ip' (or
       ip4) for TCP/IP v4, 'ip6' for TCP/IP v6, or 'sock'  for	a  local  unix
       domain  socket.	'hostname'  is	either	a  hostname or IP address, and
       'port' is the port to listen to (only valid for	TCP/IP,	 not  a	 local
       socket). Some examples:

       1) fio --server

	  Start	 a fio server, listening on all interfaces on the default port
       (8765).

       2) fio --server=ip:hostname,4444

	  Start a fio server, listening on IP belonging	 to  hostname  and  on
       port 4444.

       3) fio --server=ip6:::1,4444

	  Start	 a  fio	 server,  listening  on IPv6 localhost ::1 and on port
       4444.

       4) fio --server=,4444

	  Start a fio server, listening on all interfaces on port 4444.

       5) fio --server=1.2.3.4

	  Start a fio server, listening on IP 1.2.3.4 on the default port.

       6) fio --server=sock:/tmp/fio.sock

	  Start a fio server, listening on the local socket /tmp/fio.sock.

       When a server is running, you can connect to  it	 from  a  client.  The
       client is run with:

       fio --local-args --client=server --remote-args <job file(s)>

       where  --local-args are arguments that are local to the client where it
       is running, 'server' is the connect string, and --remote-args and  <job
       file(s)>	 are  sent to the server. The 'server' string follows the same
       format as it does on the server side, to allow IP/hostname/socket and
       port strings. You can also connect to multiple servers; to do that you
       could run:

       fio --client=server1 --client=server2 <job file(s)>

       If the job file is located on the fio server, then  you	can  tell  the
       server  to  load	 a local file as well. This is done by using --remote-
       config:

       fio --client=server --remote-config /path/to/file.fio

       Then fio will open this local (to the server) job file instead of being
       passed one from the client.

       If you have many servers (example: 100 VMs/containers), you can input a
       pathname of a file containing host IPs/names as the parameter value for
       the --client option.  For example, here is a "host.list" file
       containing 2 hostnames:

       host1.your.dns.domain
       host2.your.dns.domain

       The fio command would then be:

       fio --client=host.list <job file>

       In this mode, you cannot input server-specific parameters or job files,
       and all servers receive the same job file.

       In order to enable fio --client runs utilizing a shared filesystem from
       multiple hosts, fio --client now prepends the IP address of the	server
       to  the	filename.  For example, if fio is using directory /mnt/nfs/fio
       and is writing filename fileio.tmp, with a --client hostfile containing
       two   hostnames	 h1  and  h2  with  IP	addresses  192.168.10.120  and
       192.168.10.121, then fio will create two files:

       /mnt/nfs/fio/192.168.10.120.fileio.tmp
       /mnt/nfs/fio/192.168.10.121.fileio.tmp

AUTHORS
       fio was written by Jens Axboe <jens.axboe@oracle.com>, now  Jens	 Axboe
       <axboe@fb.com>.
       This  man  page	was  written by Aaron Carroll <aaronc@cse.unsw.edu.au>
       based on documentation by Jens Axboe.

REPORTING BUGS
       Report bugs to the fio mailing list <fio@vger.kernel.org>.  See README.

SEE ALSO
       For further documentation see HOWTO and README.
       Sample jobfiles are available in the examples directory.

User Manual			 December 2014				fio(1)