Test::Assertions::Manual(3)   User Contributed Perl Documentation   Test::Assertions::Manual(3)

NAME

       Test::Assertions::Manual - A guide to using Test::Assertions

DESCRIPTION

       This is a brief guide to how you can use the Test::Assertions module in
       your code and test scripts.  The "Test::Assertions" documentation has a
       comprehensive list of options.

Unit testing

       To use Test::Assertions for unit testing, import it with the argument
       "test":

               use Test::Assertions qw(test);

       The output of Test::Assertions in test mode is suitable for collation
       with Test::Harness.  Only the ASSERT() and plan() routines can create
       any output - all the other routines simply return values.

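       For example, a minimal test script (a hypothetical t/example.t sketch
       using only the exports documented below) might look like this; plan()
       and ASSERT() are explained in the following sections.

               #!/usr/bin/perl
               use strict;
               use Test::Assertions qw(test);

               plan tests => 2;

               ASSERT(1 + 1 == 2,            "basic arithmetic");
               ASSERT(EQUAL([1, 2], [1, 2]), "deep comparison of two arrays");
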
   Planning tests
       Test::Assertions offers a "plan tests" syntax similar to Test::More:

               plan tests => 42;
               # Which creates the output:
               1..42

       If you find it irritating having to increment the number at the top of
       your test script every time you add a test, you can use the automatic,
       Do What I Mean, form:

               plan tests;

       In this case, Test::Assertions will read your code, count the number of
       ASSERT statements and use that as the expected number of tests.  One
       caveat is that it expects each ASSERT statement to be executed exactly
       once, so ASSERTs inside if blocks or foreach loops will fool
       Test::Assertions and you will have to maintain the count manually in
       those cases.  Furthermore, it uses caller() to get the filename of the
       code, so it may not work if you invoke your program with a relative
       filename and then change working directory before calling the automatic
       "plan tests;" form.

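       For instance (an illustrative sketch, with a made-up square() function
       standing in for the code under test), ASSERTs inside a loop run once
       per iteration, so the count is best supplied explicitly:

               use Test::Assertions qw(test);

               sub square { return $_[0] ** 2 }   # function under test (illustrative)

               my @cases = ([1, 1], [2, 4], [3, 9]);
               plan tests => scalar @cases;       # count maintained by hand

               foreach my $case (@cases) {
                       my ($in, $expected) = @$case;
                       ASSERT(square($in) == $expected, "square($in)");
               }
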
       Test::Assertions offers a couple of additional functions, only() and
       ignore(), to control which tests will be reported.  Usage is as
       follows:

               ignore(2, 5) if($^O eq 'MSWin32');
               only(1..10) unless($^O eq 'MSWin32');

       Note that these won't stop the actual test code from being attempted,
       but the results won't be reported.

   Testing things
       The routines for constructing tests are deliberately ALL CAPS so you
       can discriminate at a glance between the test and what is being tested.
       To check that something does what is expected, use ASSERT:

               ASSERT(1 == 1);

       This gives the output:

               ok 1

       An optional second argument may be supplied as a comment to label the
       test:

               ASSERT(1 == 1, "an example test");

       This gives the output:

               ok 1 (an example test)

       In the interest of brevity, I'll omit the second argument from the
       examples below.  For your real-world tests, labelling the output is
       strongly recommended so that when something fails you know what it is.

       If you are hopelessly addicted to invoking your tests with an ok()
       routine, Test::Assertions has a concession for Test::Simple/More
       junkies:

               use Test::Assertions qw(test/ok);
               plan tests => 1;
               ok(1, "ok() works just like ASSERT()");

   More complex tests with helper routines
       Most real-world unit tests will need to check data structures returned
       from an API.  The EQUAL() function compares two data structures deeply
       (a bit like Test::More's eq_array or eq_hash):

               ASSERT( EQUAL(\@arr, [1,2,3]) );
               ASSERT( EQUAL(\%observed, \%expected) );

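       As an illustrative sketch, the comparison is deep, so nested structures
       can be checked against an expected value written out in full:

               my %observed = (
                       name  => 'widget',
                       sizes => [10, 20, 30],
               );
               ASSERT( EQUAL(\%observed, {
                       name  => 'widget',
                       sizes => [10, 20, 30],
               }), "nested structure matches" );
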
       For routines that return large strings or write to files (e.g.
       templating), you might want to hold your expected output externally in
       a file.  Test::Assertions provides a few routines to make this easy.
       EQUALS_FILE compares a string to the contents of a file:

               ASSERT( EQUALS_FILE($returned, "expected.txt") );

       FILES_EQUAL, on the other hand, compares the contents of two files:

               $object_to_test->write_file("observed.txt");
               ASSERT( FILES_EQUAL("observed.txt", "expected.txt") );
               unlink("observed.txt"); # always clean up so state on 2nd run is same as 1st run

       If your files contain serialized data structures, e.g. the output of
       Data::Dumper, you may wish to use do() or eval() to load their contents
       and use the EQUAL() routine to compare the structures, rather than
       comparing the serialized forms directly:

               my $var1 = do('file1.datadump');
               my $var2 = do('file2.datadump');
               ASSERT( EQUAL($var1, $var2), 'serialized versions matched' );

       The MATCHES_FILE routine compares a string with a regex that is read
       from a file, which is most useful if your string contains dates,
       timestamps, filepaths, or other items which might change from one run
       of the test to the next, or across different machines:

               ASSERT( MATCHES_FILE($string_to_examine, "expected.regex.txt") );

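       As an illustration (assuming the file simply holds the regular
       expression to apply), expected.regex.txt might contain a pattern along
       the lines of:

               Report generated at \d{2}:\d{2}:\d{2} on \d{4}-\d{2}-\d{2}
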
       Another thing you are likely to want to test is code that raises
       exceptions with die().  The DIED() function confirms whether a coderef
       raises an exception:

               ASSERT( DIED(
                       sub {
                               $object_to_test->method(@bad_inputs);
                       }
               ));

       The DIED routine doesn't clobber $@, so you can use $@ in your test
       description:

               ASSERT( DIED(
                       sub {
                               $object_to_test->method(@bad_inputs);
                       }
               ), "raises an exception - " . (chomp $@, $@));

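       Since DIED() simply returns a true or false value, it can also be
       negated to check that well-formed input does not raise an exception (a
       small sketch, using a hypothetical @good_inputs):

               ASSERT( !DIED(
                       sub {
                               $object_to_test->method(@good_inputs);
                       }
               ), "valid input does not raise an exception");
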
       Occasionally you'll want to check that a Perl script simply compiles.
       Whilst this is no substitute for writing a proper unit test for the
       script, sometimes it's useful:

               ASSERT( COMPILES("somescript.pl") );

       An optional second argument forces the code to be compiled under
       'strict':

               ASSERT( COMPILES("somescript.pl", 1) );

       (Normally you'll have this in your script anyway.)

   Aggregating other tests together
       For complex systems you may have a whole tree of unit tests
       corresponding to different areas of functionality of the system.  For
       example, there may be a set of tests for the expression-evaluation
       sublanguage within a templating system.  Rather than simply aggregating
       everything with Test::Harness in one flat list, you may want to
       aggregate each subtree of related functionality so that the
       Test::Harness summarisation is across these higher-level units.

       Test::Assertions provides two functions to aggregate the output of
       other tests.  These work on result strings (starting with "ok" or "not
       ok").  ASSESS is the lower-level routine, working directly on result
       strings; ASSESS_FILE runs a unit test script and parses its output.  In
       a scalar context they return a summary result string:

               @results = ('ok 1', 'not ok 2', 'A comment', 'ok 3');
               print scalar ASSESS(\@results);

       would result in something like:

               not ok (1 errors in 3 tests)

       This output is of course a suitable input to ASSESS, so complex
       hierarchies may be created.  In an array context, they return a boolean
       value and a description which is suitable for feeding into ASSERT
       (although ASSERT's $;$ prototype means it will ignore the description):

               ASSERT ASSESS_FILE("expr/set_1.t");
               ASSERT ASSESS_FILE("expr/set_2.t");
               ASSERT ASSESS_FILE("expr/set_3.t");

       would generate output such as:

               ok 1
               ok 2
               ok 3

       Finally, Test::Assertions provides a helper routine to interpret result
       strings:

               ($bool, $description) = INTERPRET("not ok 4 (test four)");

       would result in:

               $bool = 0;
               $description = "test four";

       which might be useful for writing your own custom collation code.

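       A small sketch of such custom collation (illustrative code, not part of
       Test::Assertions, reusing @results from the ASSESS example above) might
       count the failures in a list of result strings:

               my ($failed, $total) = (0, 0);
               foreach my $line (@results) {
                       next unless $line =~ /^(not )?ok\b/;   # only look at result strings
                       my ($ok, $description) = INTERPRET($line);
                       $total++;
                       unless($ok) {
                               $failed++;
                               print "FAILED: $description\n";
                       }
               }
               print "$failed of $total tests failed\n";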

Using Test::Assertions for run-time checking

       C programmers often use ASSERT macros to trap runtime "should never
       happen" errors in their code.  You can use Test::Assertions to do this:

               use Test::Assertions qq(die);
               $rv = some_function();
               ASSERT($rv == 0, "some_function returned a non-zero value");

       You can also import Test::Assertions with 'warn' rather than 'die' so
       that the code continues executing:

               use constant ASSERTIONS_MODE => $ENV{ENVIRONMENT} eq 'production' ? 'warn' : 'die';
               use Test::Assertions(ASSERTIONS_MODE);

       Environment variables provide a nice way of switching compile-time
       behaviour from outside the process.

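       If ENVIRONMENT may be unset, supplying a default avoids "uninitialized
       value" warnings under the warnings pragma (an illustrative variation on
       the example above):

               use constant ASSERTIONS_MODE =>
                       ($ENV{ENVIRONMENT} || 'development') eq 'production' ? 'warn' : 'die';
               use Test::Assertions(ASSERTIONS_MODE);
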
   Minimising overhead
       Importing Test::Assertions with no arguments results in ASSERT
       statements doing nothing, but unlike ASSERT macros in C, where the
       preprocessor filters them out before compilation, there are two types
       of residual overhead:

       Runtime overhead
           When Test::Assertions is imported with no arguments, the ASSERT
           statement is aliased to an empty sub.  There is a small overhead in
           executing this.  In practice, unless you do an ASSERT on every
           other line, or in a performance-critical loop, you're unlikely to
           notice the overhead compared to the other work that your code is
           doing.

       Compilation overhead
           The Test::Assertions module must be compiled even when it is
           imported with no arguments.  Test::Assertions loads its helper
           modules on demand and avoids using pragmas to minimise its
           compilation overhead.  In the interests of maintainability and ease
           of installation, it does not currently go to more extreme measures
           to cut its compilation overhead.

       Both can be minimised by using a constant:

               use constant ENABLE_ASSERTIONS => $ENV{ENABLE_ASSERTIONS};

               # Minimise compile-time overhead
               if(ENABLE_ASSERTIONS) {
                       require Test::Assertions;
                       import Test::Assertions qq(die);
               }

               $rv = some_function();

               # Eliminate runtime overhead
               ASSERT($rv == 0, "some_function returned a non-zero value") if(ENABLE_ASSERTIONS);

       Unlike Carp::Assert, Test::Assertions does not come with a "built-in"
       constant (DEBUG in the case of Carp::Assert).  Define your own
       constant, attach it to your own compile-time logic (e.g. environment
       variables) and call it whatever you like.

   How expensive is a null ASSERT?
       Here's an indication of the overhead of calling ASSERT when
       Test::Assertions is imported with no arguments.  A comparison with
       Carp::Assert is included just to show that the overhead is in the same
       ballpark - we are not advocating one module over the other.  As
       outlined above, using a constant to disable assertions is recommended
       in performance-critical code.

               #!/usr/local/bin/perl

               use Benchmark;
               use Test::Assertions;
               use Carp::Assert;
               use constant ENABLE_ASSERTIONS => 0;

               # Compare a null ASSERT to a simple arithmetic statement
               timethis(1e6, sub{
                       ASSERT(1); # Test::Assertions
               });
               timethis(1e6, sub{
                       assert(1); # Carp::Assert
               });
               timethis(1e6, sub{
                       ASSERT(1) if ENABLE_ASSERTIONS;
               });
               timethis(1e6, sub{
                       $x = $x*2 + 3;
               });

       Results on a Sun E250 (with 2 x 400MHz CPUs) running Perl 5.6.1 on
       Solaris 9:

               Test::Assertions:           timethis 1000000:  3 wallclock secs ( 3.88 usr +  0.00 sys =  3.88 CPU) @ 257731.96/s (n=1000000)
               Carp::Assert:               timethis 1000000:  6 wallclock secs ( 6.08 usr +  0.00 sys =  6.08 CPU) @ 164473.68/s (n=1000000)
               Test::Assertions + const:   timethis 1000000: -1 wallclock secs ( 0.07 usr +  0.00 sys =  0.07 CPU) @ 14285714.29/s (n=1000000) (warning: too few iterations for a reliable count)
               some algebra:               timethis 1000000:  1 wallclock secs ( 2.50 usr +  0.00 sys =  2.50 CPU) @ 400000.00/s (n=1000000)

       Results for a 1.7GHz Pentium M running ActiveState Perl 5.6.1 on
       Windows XP:

               Test::Assertions:           timethis 1000000:  0 wallclock secs ( 0.42 usr +  0.00 sys =  0.42 CPU) @ 2380952.38/s (n=1000000)
               Carp::Assert:               timethis 1000000:  0 wallclock secs ( 0.57 usr +  0.00 sys =  0.57 CPU) @ 1751313.49/s (n=1000000)
               Test::Assertions + const:   timethis 1000000: -1 wallclock secs (-0.02 usr +  0.00 sys = -0.02 CPU) @ -50000000.00/s (n=1000000) (warning: too few iterations for a reliable count)
               some algebra:               timethis 1000000:  0 wallclock secs ( 0.50 usr +  0.00 sys =  0.50 CPU) @ 1996007.98/s (n=1000000)

   How significant is the compile-time overhead?
       Here's an indication of the compile-time overhead for Test::Assertions
       v1.050 and Carp::Assert v0.18.  The cost of running import() is also
       included.

               #!/usr/local/bin/perl

               use Benchmark;
               use lib qw(../lib);

               timethis(3e2, sub {
                       require Test::Assertions;
                       delete $INC{"Test/Assertions.pm"};
               });

               timethis(3e2, sub {
                       require Test::Assertions;
                       import Test::Assertions;
                       delete $INC{"Test/Assertions.pm"};
               });

               timethis(3e2, sub {
                       require Carp::Assert;
                       delete $INC{"Carp/Assert.pm"};
               });

               timethis(3e2, sub {
                       require Carp::Assert;
                       import Carp::Assert;
                       delete $INC{"Carp/Assert.pm"};
               });

       Results on a Sun E250 (with 2 x 400MHz CPUs) running Perl 5.6.1 on
       Solaris 9:

               Test::Assertions:           timethis 300:  6 wallclock secs ( 6.19 usr +  0.10 sys =  6.29 CPU) @ 47.69/s (n=300)
               Test::Assertions + import:  timethis 300:  7 wallclock secs ( 6.56 usr +  0.03 sys =  6.59 CPU) @ 45.52/s (n=300)
               Carp::Assert:               timethis 300:  3 wallclock secs ( 2.47 usr +  0.32 sys =  2.79 CPU) @ 107.53/s (n=300)
               Carp::Assert + import:      timethis 300: 41 wallclock secs (40.58 usr +  0.32 sys = 40.90 CPU) @  7.33/s (n=300)

       Results for a 1.7GHz Pentium M running ActiveState Perl 5.6.1 on
       Windows XP:

               Test::Assertions:           timethis 300:  2 wallclock secs ( 1.45 usr +  0.21 sys =  1.66 CPU) @ 180.51/s (n=300)
               Test::Assertions + import:  timethis 300:  2 wallclock secs ( 1.58 usr +  0.29 sys =  1.87 CPU) @ 160.26/s (n=300)
               Carp::Assert:               timethis 300:  1 wallclock secs ( 0.99 usr +  0.26 sys =  1.25 CPU) @ 239.62/s (n=300)
               Carp::Assert + import:      timethis 300:  6 wallclock secs ( 5.42 usr +  0.38 sys =  5.80 CPU) @ 51.74/s (n=300)

       If using a constant to control compilation is not to your liking, you
       may want to experiment with SelfLoader or AutoLoader to cut the
       compilation overhead down further by delaying compilation of some of
       the subroutines in Test::Assertions until the first time they are used
       (see SelfLoader and AutoLoader for more information).

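       As a rough sketch of the general mechanism (shown here for a
       hypothetical My::Module rather than for Test::Assertions itself),
       SelfLoader compiles any subroutines placed after __DATA__ only when
       they are first called:

               package My::Module;
               use SelfLoader;

               sub always_needed {
                       # compiled at load time as normal
                       return "cheap";
               }

               1;

               __DATA__

               sub rarely_needed {
                       # not compiled until the first time it is called
                       return "expensive";
               }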

VERSION

       $Revision: 1.10 $ on $Date: 2005/05/04 15:56:39 $

AUTHOR

       John Alden <cpan _at_ bbc _dot_ co _dot_ uk>

perl v5.30.0                      2019-07-26       Test::Assertions::Manual(3)