Test::Assertions(3)   User Contributed Perl Documentation  Test::Assertions(3)

NAME
       Test::Assertions - a simple set of building blocks for both unit and
       runtime testing

SYNOPSIS
	       #ASSERT does nothing
	       use Test::Assertions;

	       #ASSERT warns "Assertion failure"...
	       use Test::Assertions qw(warn);

	       #ASSERT dies with "Assertion failure"...
	       use Test::Assertions qw(die);

	       #ASSERT warns "Assertion failure"... with stack trace
	       use Test::Assertions qw(cluck);

	       #ASSERT dies with "Assertion failure"... with stack trace
	       use Test::Assertions qw(confess);

	       #ASSERT prints ok/not ok
	       use Test::Assertions qw(test);

	       #Will cause an assertion failure
	       ASSERT(1 == 0);

	       #Optional message
	       ASSERT(0 == 1, "daft");

	       #Checks if coderef dies
	       ASSERT(
		       DIED( sub {die()} )
	       );

	       #Check if perl compiles OK
	       ASSERT(
		       COMPILES('program.pl')
	       );

	       #Deep comparisons
	       ASSERT(
		       EQUAL(\@a, \@b),
		       "lists of widgets match"	       # an optional message
	       );
	       ASSERT(
		       EQUAL(\%a, \%b)
	       );

	       #Compare to a canned value
	       ASSERT(
		       EQUALS_FILE($foo, 'bar.dat'),
		       "value matched stored value"
	       );

	       #Compare to a canned value (regex match using file contents as regex)
	       ASSERT(
		       MATCHES_FILE($foo, 'bar.regex')
	       );

	       #Compare file contents
	       ASSERT(
		       FILES_EQUAL('foo.dat', 'bar.dat')
	       );

	       #returns 'not ok for Foo::Bar Tests (1 errors in 3 tests)'
	       ASSESS(
			['ok 1', 'not ok 2', 'A comment', 'ok 3'], 'Foo::Bar Tests', 0
	       );

	       #Collate results from another test script
	       ASSESS_FILE("test.pl");

	       #File routines
	       $success = WRITE_FILE('bar.dat', 'hello world');
	       ASSERT( WRITE_FILE('bar.dat', 'hello world'), 'file was written');
	       $string = READ_FILE('example.out');
	       ASSERT( READ_FILE('example.out'), 'file has content' );

       The helper routines don't need to be used inside ASSERT():

	       if ( EQUALS_FILE($string, $filename) ) {
		       print "File hasn't changed - skipping\n";
	       } else {
		       my $rc = run_complex_process($string);
		       print "File changed - string was reprocessed with result '$rc'\n";
	       }

	       ($boolean, $output) = COMPILES('file.pl');
	       # or...
	       my $string;
	       ($boolean, $standard_output) = COMPILES('file.pl', 1, \$string);
	       # $string now contains standard error, separate from $standard_output

       In test mode:

	       use Test::Assertions qw(test);
	       plan tests => 4;
	       plan tests;				       #will attempt to deduce the number
	       only (1,2);				       #Only report ok/not ok for these tests
	       ignore 2;				       #Skip this test

	       #In test/ok mode...
	       use Test::Assertions qw(test/ok);
	       ok(1);					       #synonym for ASSERT

DESCRIPTION
       Test::Assertions provides a convenient set of tools for constructing
       tests, such as unit tests or run-time assertion checks (like C's ASSERT
       macro).	Unlike some of the Test:: modules available on CPAN,
       Test::Assertions is not limited to unit test scripts; for example, it
       can be used to check that output is as expected within a benchmarking
       script.	When it is used for unit tests, it generates output in the
       standard form for CPAN unit testing (under Test::Harness).

       The package's import method is used to control the behaviour of ASSERT:
       whether it dies, warns, prints 'ok'/'not ok', or does nothing.

       In 'test' mode the module also exports plan(), only() and ignore()
       functions.  In 'test/ok' mode an ok() function is also exported for
       compatibility with Test/Test::Harness.  The plan function attempts to
       count the number of tests if it isn't told a number (this works in
       simple test scripts, but not when ASSERT is called in loops or
       subroutines).  In either mode, a warning is emitted if the planned
       number of tests does not match the number actually run, e.g.

	       # Looks like you planned 2 tests but actually ran 1.
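       For example, a minimal test script in 'test' mode might look like
       this (a sketch; the values being tested are illustrative):

	       use Test::Assertions qw(test);

	       plan tests => 2;
	       ASSERT(1 + 1 == 2, "arithmetic works");		#prints "ok 1"
	       ASSERT(DIED( sub { die() } ), "coderef died");	#prints "ok 2"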

   METHODS
       plan $number_of_tests
	   Specify the number of tests to expect.  If $number_of_tests isn't
	   supplied, Test::Assertions tries to deduce the number itself by
	   parsing the calling script and counting the calls to ASSERT.  It
	   also returns the number of tests, should you wish to make use of
	   that figure at some point.  In 'test' and 'test/ok' mode a warning
	   will be emitted if the actual number of tests does not match the
	   number planned, similar to Test::More.
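	   For example, the return value can be used to report the deduced
	   count (a sketch; the warning message is illustrative):

	    my $count = plan tests;	#no number given - count ASSERT calls
	    warn "About to run $count tests\n";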

       only(@test_numbers)
	   Only display the results of these tests.

       ignore(@test_numbers)
	   Don't display the results of these tests.

       ASSERT($bool, $comment)
	   The workhorse function.  Behaviour depends on how the module was
	   imported.  $comment is optional.

       ASSESS(@result_strings)
	   Collate the results from a set of tests.  In a scalar context
	   returns a result string starting with "ok" or "not ok"; in a list
	   context returns 1=pass or 0=fail, followed by a description.

	    ($bool, $desc) = ASSESS(@args)

	   is equivalent to

	    ($bool, $desc) = INTERPRET(scalar ASSESS(@args))
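	   For example, following the calling form shown in the SYNOPSIS
	   (the result strings and test name are illustrative):

	    my @results = ('ok 1', 'not ok 2', 'ok 3');

	    #Scalar context: a single result string
	    my $summary = ASSESS(\@results, 'Widget tests', 0);

	    #List context: a pass/fail flag and a description
	    my ($ok, $desc) = ASSESS(\@results, 'Widget tests', 0);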

       ASSESS_FILE($file, $verbose, $timeout)
	    $verbose is an optional boolean.
	    The default timeout is 60 seconds (0 = never time out).

	   In a scalar context returns a result string; in a list context
	   returns 1=pass or 0=fail, followed by a description.	 The timeout
	   uses alarm(), but has no effect on platforms which do not implement
	   alarm().
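	   For example (the file name and timeout are illustrative):

	    my ($ok, $desc) = ASSESS_FILE('t/widgets.t', 0, 120);
	    print "Sub-script failed: $desc\n" unless $ok;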

       ($bool, $desc) = INTERPRET($result_string)
	   Interprets a result string.	$bool indicates 1=pass/0=fail; $desc is
	   an optional description.
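	   For example, using the ASSESS output string shown in the SYNOPSIS:

	    my ($ok, $desc) =
		    INTERPRET('not ok for Foo::Bar Tests (1 errors in 3 tests)');
	    #$ok is 0; $desc describes the failure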

       $bool = EQUAL($item1, $item2)
	   Deep comparison of 2 data structures (i.e. references to some kind
	   of structure) or scalars.
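	   For example, EQUAL recurses into nested structures (the data
	   shown is illustrative):

	    my %a = (name => 'widget', sizes => [1, 2, 3]);
	    my %b = (name => 'widget', sizes => [1, 2, 3]);
	    ASSERT( EQUAL(\%a, \%b), "widget hashes match" );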

       $bool = EQUALS_FILE($string, $filename)
	   Compares a string with a canned value in a file.

       $bool = MATCHES_FILE($string, $regexfilename)
	   Compares a value with a regex that is read from a file. The regex
	   has the '^' anchor prepended and the '$' anchor appended, after
	   being read in from the file.	 Handy if you have random numbers or
	   dates in your output.
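	   For example, if the file report.regex (the name and contents are
	   illustrative) contains

	    Report generated on \d{4}-\d{2}-\d{2}

	   then output that differs only in the date still matches:

	    ASSERT( MATCHES_FILE($report, 'report.regex') );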

       $bool = FILES_EQUAL($filename1, $filename2)
	   Test if two files' contents are identical.

       $bool = DIED($coderef)
	   Test if the coderef dies when executed.

       COMPILES($filename, $strict, $scalar_reference)
	   Test if the perl code in $filename compiles OK, like perl -c.  If
	   $strict is true, tests with the options -Mstrict -w.

	   In scalar context it returns 1 if the code compiled, 0 otherwise.
	   In list context it returns the same boolean, followed by the output
	   (that is, standard output and standard error combined) of the
	   syntax check.

	   If $scalar_reference is supplied and is a scalar reference then the
	   standard output and standard error of the syntax check subprocess
	   will be captured separately. Standard error will be put into this
	   scalar - IO::CaptureOutput is loaded on demand to do this - and
	   standard output will be returned as described above.
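	   For example (the file name is illustrative):

	    #Syntax-check under -Mstrict -w; show diagnostics on failure
	    my ($ok, $output) = COMPILES('bin/widget.pl', 1);
	    print $output unless $ok;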

       $contents = READ_FILE($filename)
	   Reads the specified file and returns the contents.  Returns undef
	   if file cannot be read.

       $success = WRITE_FILE($filename, $contents)
	   Writes the given contents to the specified file.  Returns undef if
	   file cannot be written.
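	   For example, the two routines can be combined to round-trip a
	   value through a file (the file name is illustrative):

	    WRITE_FILE('state.dat', $value) or die "cannot write state.dat";
	    my $copy = READ_FILE('state.dat');
	    ASSERT( EQUAL($value, $copy), "value survived the round trip" );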

OVERHEAD
       When Test::Assertions is imported with no arguments, ASSERT is aliased
       to an empty coderef.  If this is still too much runtime overhead for
       you, you can use a constant to optimise out ASSERT statements at
       compile time.  See the section on runtime testing in
       Test::Assertions::Manual for a discussion of overheads, some examples
       and some benchmark results.
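       For example, a sketch of the compile-time technique (the constant
       name and expensive_check() are illustrative):

	       use constant TESTING => 0;
	       use Test::Assertions;

	       #With TESTING false, perl removes the whole statement at
	       #compile time, so expensive_check() is never even called
	       ASSERT( expensive_check() ) if TESTING;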

DEPENDENCIES
       The following modules are loaded on demand:

	Carp
	File::Spec
	Test::More
	File::Compare
	IO::CaptureOutput

RELATED MODULES
       Test and Test::Simple
	   Minimal unit testing modules

       Test::More
	   Richer unit testing toolkit compatible with Test and Test::Simple

       Carp::Assert
	   Runtime testing toolkit

TODO
	       - Declare ASSERT() with :assertions attribute in versions of perl >= 5.9
		 so it can be optimised away at runtime. It should be possible to declare
		 the attribute conditionally in a BEGIN block (with eval) for backwards
		 compatibility

SEE ALSO
       Test::Assertions::Manual - A guide to using Test::Assertions

VERSION
       $Revision: 1.54 $ on $Date: 2006/08/07 10:44:42 $ by $Author: simonf $

AUTHOR
       John Alden with additions from Piers Kent and Simon Flack <cpan _at_
       bbc _dot_ co _dot_ uk>

COPYRIGHT
       (c) BBC 2005. This program is free software; you can redistribute it
       and/or modify it under the GNU GPL.

       See the file COPYING in this distribution, or
       http://www.gnu.org/licenses/gpl.txt

perl v5.14.1			  2006-08-10		   Test::Assertions(3)