tcltest(n)                   Tcl Bundled Packages                   tcltest(n)



______________________________________________________________________________


NAME

       tcltest - Test harness support code and utilities


SYNOPSIS

       package require tcltest ?2.5?

       tcltest::test name description ?-option value ...?
       tcltest::test name description ?constraints? body result

       tcltest::loadTestedCommands
       tcltest::makeDirectory name ?directory?
       tcltest::removeDirectory name ?directory?
       tcltest::makeFile contents name ?directory?
       tcltest::removeFile name ?directory?
       tcltest::viewFile name ?directory?
       tcltest::cleanupTests ?runningMultipleTests?
       tcltest::runAllTests

       tcltest::configure
       tcltest::configure -option
       tcltest::configure -option value ?-option value ...?
       tcltest::customMatch mode command
       tcltest::testConstraint constraint ?value?
       tcltest::outputChannel ?channelID?
       tcltest::errorChannel ?channelID?
       tcltest::interpreter ?interp?

       tcltest::debug ?level?
       tcltest::errorFile ?filename?
       tcltest::limitConstraints ?boolean?
       tcltest::loadFile ?filename?
       tcltest::loadScript ?script?
       tcltest::match ?patternList?
       tcltest::matchDirectories ?patternList?
       tcltest::matchFiles ?patternList?
       tcltest::outputFile ?filename?
       tcltest::preserveCore ?level?
       tcltest::singleProcess ?boolean?
       tcltest::skip ?patternList?
       tcltest::skipDirectories ?patternList?
       tcltest::skipFiles ?patternList?
       tcltest::temporaryDirectory ?directory?
       tcltest::testsDirectory ?directory?
       tcltest::verbose ?level?

       tcltest::test name description optionList
       tcltest::bytestring string
       tcltest::normalizeMsg msg
       tcltest::normalizePath pathVar
       tcltest::workingDirectory ?dir?
______________________________________________________________________________


DESCRIPTION

       The tcltest package provides several utility commands useful in the
       construction of test suites for code instrumented to be run by
       evaluation of Tcl commands.  Notably the built-in commands of the Tcl
       library itself are tested by a test suite using the tcltest package.

       All the commands provided by the tcltest package are defined in and
       exported from the ::tcltest namespace, as indicated in the SYNOPSIS
       above.  In the following sections, all commands will be described by
       their simple names, in the interest of brevity.

       The central command of tcltest is test, which defines and runs a
       test.  Testing with test involves evaluation of a Tcl script and
       comparing the result to an expected result, as configured and
       controlled by a number of options.  Several other commands provided
       by tcltest govern the configuration of test and the collection of
       many test commands into test suites.

       See CREATING TEST SUITES WITH TCLTEST below for an extended example
       of how to use the commands of tcltest to produce test suites for your
       Tcl-enabled code.


COMMANDS

       test name description ?-option value ...?
              Defines and possibly runs a test with the name name and
              description description.  The name and description of a test
              are used in messages reported by test during the test, as
              configured by the options of tcltest.  The remaining option
              value arguments to test define the test, including the scripts
              to run, the conditions under which to run them, the expected
              result, and the means by which the expected and actual results
              should be compared.  See TESTS below for a complete
              description of the valid options and how they define a test.
              The test command returns an empty string.

       test name description ?constraints? body result
              This form of test is provided to support test suites written
              for version 1 of the tcltest package, and also a simpler
              interface for a common usage.  It is the same as “test name
              description -constraints constraints -body body -result
              result”.  All other options to test take their default values.
              When constraints is omitted, this form of test can be
              distinguished from the first because all options begin with
              “-”.

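              For illustration, the two commands below define equivalent
              tests, one in each form; this is a minimal sketch, and the
              test names and the format command merely stand in for code
              under test:

                     test example-form-1 {illustrate the option form} -body {
                         format %s value
                     } -result value

                     test example-form-2 {illustrate the shorthand form} {
                         format %s value
                     } value
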
       loadTestedCommands
              Evaluates in the caller's context the script specified by
              configure -load or configure -loadfile.  Returns the result of
              that script evaluation, including any error raised by the
              script.  Use this command and the related configuration
              options to provide the commands to be tested to the
              interpreter running the test suite.

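              For example, a test file might arrange for the tested package
              to be loaded as follows; this is a sketch, and the package
              name example is a placeholder:

                     ::tcltest::configure -load {
                         package require example
                     }
                     ::tcltest::loadTestedCommands
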
       makeFile contents name ?directory?
              Creates a file named name relative to directory directory and
              writes contents to that file using the system encoding.  If
              contents does not end with a newline, a newline will be
              appended so that the file named name does end with a newline.
              Because the system encoding is used, this command is only
              suitable for making text files.  The file will be removed by
              the next evaluation of cleanupTests, unless it is removed by
              removeFile first.  The default value of directory is the
              directory configure -tmpdir.  Returns the full path of the
              file created.  Use this command to create any text file
              required by a test with contents as needed.

       removeFile name ?directory?
              Forces the file referenced by name to be removed.  This file
              name should be relative to directory.  The default value of
              directory is the directory configure -tmpdir.  Returns an
              empty string.  Use this command to delete files created by
              makeFile.

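              For example, a test needing a scratch text file might be
              written as follows; this is a sketch, and the test name, file
              name, and contents are arbitrary:

                     test makefile-sketch-1 {create and remove a scratch file} -setup {
                         set f [makeFile {line one} scratch.txt]
                     } -body {
                         file tail $f
                     } -cleanup {
                         removeFile scratch.txt
                     } -result scratch.txt
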
       makeDirectory name ?directory?
              Creates a directory named name relative to directory
              directory.  The directory will be removed by the next
              evaluation of cleanupTests, unless it is removed by
              removeDirectory first.  The default value of directory is the
              directory configure -tmpdir.  Returns the full path of the
              directory created.  Use this command to create any directories
              that are required to exist by a test.

       removeDirectory name ?directory?
              Forces the directory referenced by name to be removed.  This
              directory should be relative to directory.  The default value
              of directory is the directory configure -tmpdir.  Returns an
              empty string.  Use this command to delete any directories
              created by makeDirectory.

       viewFile file ?directory?
              Returns the contents of file, except for any final newline,
              just as read -nonewline would return.  This file name should
              be relative to directory.  The default value of directory is
              the directory configure -tmpdir.  Use this command as a
              convenient way to turn the contents of a file generated by a
              test into the result of that test for matching against an
              expected result.  The contents of the file are read using the
              system encoding, so its usefulness is limited to text files.

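              A sketch of that pattern, assuming a hypothetical command
              writeReport that stands in for the code under test:

                     test report-sketch-1 {generated report contents} -setup {
                         set out [makeFile {} report.txt]
                     } -body {
                         writeReport $out   ;# stand-in for the tested command
                         viewFile report.txt
                     } -cleanup {
                         removeFile report.txt
                     } -result {expected report text}
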
       cleanupTests
              Intended to clean up and summarize after several tests have
              been run.  Typically called once per test file, at the end of
              the file after all tests have been completed.  For best
              effectiveness, be sure that cleanupTests is evaluated even if
              an error occurs earlier in the test file evaluation.

              Prints statistics about the tests run and removes files that
              were created by makeDirectory and makeFile since the last
              cleanupTests.  Names of files and directories in the directory
              configure -tmpdir created since the last cleanupTests, but not
              created by makeFile or makeDirectory, are printed to
              outputChannel.  This command also restores the original shell
              environment, as described by the global env array.  Returns an
              empty string.

       runAllTests
              This is a master command meant to run an entire suite of
              tests, spanning multiple files and/or directories, as governed
              by the configurable options of tcltest.  See RUNNING ALL TESTS
              below for a complete description of the many variations
              possible with runAllTests.

   CONFIGURATION COMMANDS
       configure
              Returns the list of configurable options supported by tcltest.
              See CONFIGURABLE OPTIONS below for the full list of options,
              their valid values, and their effect on tcltest operations.

       configure option
              Returns the current value of the supported configurable option
              option.  Raises an error if option is not a supported
              configurable option.

       configure option value ?-option value ...?
              Sets the value of each configurable option option to the
              corresponding value value, in order.  Raises an error if an
              option is not a supported configurable option, or if value is
              not a valid value for the corresponding option, or if a value
              is not provided.  When an error is raised, the operation of
              configure is halted, and subsequent option value arguments are
              not processed.

              If the environment variable ::env(TCLTEST_OPTIONS) exists when
              the tcltest package is loaded (by package require tcltest),
              then its value is taken as a list of arguments to pass to
              configure.  This allows the default values of the
              configuration options to be set by the environment.

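              For example, setting the variable before the package is loaded
              seeds the configuration; this is a sketch, and the option
              values shown are arbitrary:

                     set ::env(TCLTEST_OPTIONS) {-verbose {body error start} -debug 1}
                     package require tcltest
                     ::tcltest::configure -verbose   ;# returns: body error start
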
       customMatch mode script
              Registers mode as a new legal value of the -match option to
              test.  When the -match mode option is passed to test, the
              script script will be evaluated to compare the actual result
              of evaluating the body of the test to the expected result.  To
              perform the match, the script is completed with two additional
              words, the expected result, and the actual result, and the
              completed script is evaluated in the global namespace.  The
              completed script is expected to return a boolean value
              indicating whether or not the results match.  The built-in
              matching modes of test are exact, glob, and regexp.

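              For example, one might register a numeric comparison mode that
              treats results as equal when they differ by less than a small
              tolerance; this is a sketch, and the mode name, procedure
              name, and tolerance are arbitrary:

                     proc approxMatch {expected actual} {
                         # invoked as: approxMatch $expected $actual
                         return [expr {abs($actual - $expected) < 1e-6}]
                     }
                     ::tcltest::customMatch approximate approxMatch

                     test math-sketch-1 {floating-point result} -match approximate -body {
                         expr {1.0 / 3.0}
                     } -result 0.3333333333
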
       testConstraint constraint ?boolean?
              Sets or returns the boolean value associated with the named
              constraint.  See TEST CONSTRAINTS below for more information.

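              For example, a constraint can be derived from a probe of the
              testing environment; this is a sketch, and the constraint name
              and the probed package are arbitrary:

                     ::tcltest::testConstraint hasThreads \
                             [expr {![catch {package require Thread}]}]

                     test thread-sketch-1 {requires the Thread package} -constraints {
                         hasThreads
                     } -body {
                         # exercise thread-related code here
                     } -result {}
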
       interpreter ?executableName?
              Sets or returns the name of the executable to be execed by
              runAllTests to run each test file when configure -singleproc
              is false.  The default value for interpreter is the name of
              the currently running program as returned by info
              nameofexecutable.

       outputChannel ?channelID?
              Sets or returns the output channel ID.  This defaults to
              stdout.  Any test that prints test related output should send
              that output to outputChannel rather than letting that output
              default to stdout.

       errorChannel ?channelID?
              Sets or returns the error channel ID.  This defaults to
              stderr.  Any test that prints error messages should send that
              output to errorChannel rather than printing directly to
              stderr.

   SHORTCUT CONFIGURATION COMMANDS
       debug ?level?
              Same as “configure -debug ?level?”.

       errorFile ?filename?
              Same as “configure -errfile ?filename?”.

       limitConstraints ?boolean?
              Same as “configure -limitconstraints ?boolean?”.

       loadFile ?filename?
              Same as “configure -loadfile ?filename?”.

       loadScript ?script?
              Same as “configure -load ?script?”.

       match ?patternList?
              Same as “configure -match ?patternList?”.

       matchDirectories ?patternList?
              Same as “configure -relateddir ?patternList?”.

       matchFiles ?patternList?
              Same as “configure -file ?patternList?”.

       outputFile ?filename?
              Same as “configure -outfile ?filename?”.

       preserveCore ?level?
              Same as “configure -preservecore ?level?”.

       singleProcess ?boolean?
              Same as “configure -singleproc ?boolean?”.

       skip ?patternList?
              Same as “configure -skip ?patternList?”.

       skipDirectories ?patternList?
              Same as “configure -asidefromdir ?patternList?”.

       skipFiles ?patternList?
              Same as “configure -notfile ?patternList?”.

       temporaryDirectory ?directory?
              Same as “configure -tmpdir ?directory?”.

       testsDirectory ?directory?
              Same as “configure -testdir ?directory?”.

       verbose ?level?
              Same as “configure -verbose ?level?”.

   OTHER COMMANDS
       The remaining commands provided by tcltest have better alternatives
       provided by tcltest or Tcl itself.  They are retained to support
       existing test suites, but should be avoided in new code.

       test name description optionList
              This form of test was provided to enable passing many options
              spanning several lines to test as a single argument quoted by
              braces, rather than needing to backslash quote the newlines
              between arguments to test.  The optionList argument is
              expected to be a list with an even number of elements
              representing option and value arguments to pass to test.
              However, these values are not passed directly, as in the
              alternate forms of switch.  Instead, this form makes an
              unfortunate attempt to overthrow Tcl's substitution rules by
              performing substitutions on some of the list elements as an
              attempt to implement a “do what I mean” interpretation of a
              brace-enclosed “block”.  The result is nearly impossible to
              document clearly, and for that reason this form is not
              recommended.  See the examples in CREATING TEST SUITES WITH
              TCLTEST below to see that this form is really not necessary to
              avoid backslash-quoted newlines.  If you insist on using this
              form, examine the source code of tcltest if you want to know
              the substitution details, or just enclose the third through
              last argument to test in braces and hope for the best.

       workingDirectory ?directoryName?
              Sets or returns the current working directory when the test
              suite is running.  The default value for workingDirectory is
              the directory in which the test suite was launched.  The Tcl
              commands cd and pwd are sufficient replacements.

       normalizeMsg msg
              Returns the result of removing the “extra” newlines from msg,
              where “extra” is rather imprecise.  Tcl offers plenty of
              string processing commands to modify strings as you wish, and
              customMatch allows flexible matching of actual and expected
              results.

       normalizePath pathVar
              Resolves symlinks in a path, thus creating a path without
              internal redirection.  It is assumed that pathVar is absolute.
              pathVar is modified in place.  The Tcl command file normalize
              is a sufficient replacement.

       bytestring string
              Constructs a string that consists of the requested sequence of
              bytes, as opposed to a string of properly formed UTF-8
              characters, using the value supplied in string.  This allows
              the tester to create denormalized or improperly formed strings
              to pass to C procedures that are supposed to accept strings
              with embedded NULL bytes, and to confirm that a string result
              has a certain pattern of bytes.  This is exactly equivalent to
              the Tcl command encoding convertfrom identity.


TESTS

       The test command is the heart of the tcltest package.  Its essential
       function is to evaluate a Tcl script and compare the result with an
       expected result.  The options of test define the test script, the
       environment in which to evaluate it, the expected result, and how to
       compare the actual result to the expected result.  Some configuration
       options of tcltest also influence how test operates.

       The valid options for test are summarized:

              test name description
                      ?-constraints keywordList|expression?
                      ?-setup setupScript?
                      ?-body testScript?
                      ?-cleanup cleanupScript?
                      ?-result expectedAnswer?
                      ?-output expectedOutput?
                      ?-errorOutput expectedError?
                      ?-returnCodes codeList?
                      ?-errorCode expectedErrorCode?
                      ?-match mode?

       The name may be any string.  It is conventional to choose a name
       according to the pattern:

              target-majorNum.minorNum

       For white-box (regression) tests, the target should be the name of
       the C function or Tcl procedure being tested.  For black-box tests,
       the target should be the name of the feature being tested.  Some
       conventions call for the names of black-box tests to have the suffix
       _bb.  Related tests should share a major number.  As a test suite
       evolves, it is best to have the same test name continue to correspond
       to the same test, so that it remains meaningful to say things like
       “Test foo-1.3 passed in all releases up to 3.4, but began failing in
       release 3.5.”

       During evaluation of test, the name will be compared to the lists of
       string matching patterns returned by configure -match, and configure
       -skip.  The test will be run only if name matches any of the patterns
       from configure -match and matches none of the patterns from configure
       -skip.

       The description should be a short textual description of the test.
       The description is included in output produced by the test, typically
       test failure messages.  Good description values should briefly
       explain the purpose of the test to users of a test suite.  The name
       of a Tcl or C function being tested should be included in the
       description for regression tests.  If the test case exists to
       reproduce a bug, include the bug ID in the description.

       Valid attributes and associated values are:

       -constraints keywordList|expression
              The optional -constraints attribute can be a list of one or
              more keywords or an expression.  If the -constraints value is
              a list of keywords, each of these keywords should be the name
              of a constraint defined by a call to testConstraint.  If any
              of the listed constraints is false or does not exist, the test
              is skipped.  If the -constraints value is an expression, that
              expression is evaluated.  If the expression evaluates to true,
              then the test is run.  Note that the expression form of
              -constraints may interfere with the operation of configure
              -constraints and configure -limitconstraints, and is not
              recommended.  Appropriate constraints should be added to any
              tests that should not always be run.  That is, conditional
              evaluation of a test should be accomplished by the
              -constraints option, not by conditional evaluation of test.
              In that way, the same number of tests are always reported by
              the test suite, though the number skipped may change based on
              the testing environment.  The default value is an empty list.
              See TEST CONSTRAINTS below for a list of built-in constraints
              and information on how to add your own constraints.

       -setup script
              The optional -setup attribute indicates a script that will be
              run before the script indicated by the -body attribute.  If
              evaluation of script raises an error, the test will fail.  The
              default value is an empty script.

       -body script
              The -body attribute indicates the script to run to carry out
              the test, which must return a result that can be checked for
              correctness.  If evaluation of script raises an error, the
              test will fail (unless the -returnCodes option is used to
              state that an error is expected).  The default value is an
              empty script.

       -cleanup script
              The optional -cleanup attribute indicates a script that will
              be run after the script indicated by the -body attribute.  If
              evaluation of script raises an error, the test will fail.  The
              default value is an empty script.

       -match mode
              The -match attribute determines how expected answers supplied
              by -result, -output, and -errorOutput are compared.  Valid
              values for mode are regexp, glob, exact, and any value
              registered by a prior call to customMatch.  The default value
              is exact.

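              For example, glob matching is convenient when only part of the
              result is of interest; this is a sketch, and the test name is
              arbitrary:

                     test match-sketch-1 {patchlevel starts with a digit} -match glob -body {
                         info patchlevel
                     } -result {[0-9]*}
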
       -result expectedValue
              The -result attribute supplies the expectedValue against which
              the return value from script will be compared.  The default
              value is an empty string.

       -output expectedValue
              The -output attribute supplies the expectedValue against which
              any output sent to stdout or outputChannel during evaluation
              of the script(s) will be compared.  Note that only output
              printed using the global puts command is used for comparison.
              If -output is not specified, output sent to stdout and
              outputChannel is not processed for comparison.

       -errorOutput expectedValue
              The -errorOutput attribute supplies the expectedValue against
              which any output sent to stderr or errorChannel during
              evaluation of the script(s) will be compared.  Note that only
              output printed using the global puts command is used for
              comparison.  If -errorOutput is not specified, output sent to
              stderr and errorChannel is not processed for comparison.

       -returnCodes expectedCodeList
              The optional -returnCodes attribute supplies expectedCodeList,
              a list of return codes that may be accepted from evaluation of
              the -body script.  If evaluation of the -body script returns a
              code not in the expectedCodeList, the test fails.  All return
              codes known to return, in both numeric and symbolic form,
              including extended return codes, are acceptable elements in
              the expectedCodeList.  Default value is “ok return”.

       -errorCode expectedErrorCode
              The optional -errorCode attribute supplies expectedErrorCode,
              a glob pattern that should match the error code reported from
              evaluation of the -body script.  If evaluation of the -body
              script returns a code not matching expectedErrorCode, the test
              fails.  Default value is “*”.  If -returnCodes does not
              include error, it is set to error.

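              For example, a test that expects a particular error might be
              written as follows; this is a sketch, and the message and
              error code are arbitrary:

                     test error-sketch-1 {failure reports an error code} -body {
                         error "bad input" {} {EXAMPLE INVALID}
                     } -returnCodes error -errorCode {EXAMPLE INVALID} -result {bad input}
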
       To pass, a test must successfully evaluate its -setup, -body, and
       -cleanup scripts.  The return code of the -body script and its result
       must match expected values, and if specified, output and error data
       from the test must match expected -output and -errorOutput values.
       If any of these conditions are not met, then the test fails.  Note
       that all scripts are evaluated in the context of the caller of test.

       As long as test is called with valid syntax and legal values for all
       attributes, it will not raise an error.  Test failures are instead
       reported as output written to outputChannel.  In default operation, a
       successful test produces no output.  The output messages produced by
       test are controlled by the configure -verbose option as described in
       CONFIGURABLE OPTIONS below.  Any output produced by the test scripts
       themselves should be produced using puts to outputChannel or
       errorChannel, so that users of the test suite may easily capture
       output with the configure -outfile and configure -errfile options,
       and so that the -output and -errorOutput attributes work properly.

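       For example, output produced with puts can be verified with the
       -output attribute; this is a sketch, and the message is arbitrary:

              test output-sketch-1 {banner is printed} -body {
                  puts "processing started"
              } -output "processing started\n"
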
   TEST CONSTRAINTS
       Constraints are used to determine whether or not a test should be
       skipped.  Each constraint has a name, which may be any string, and a
       boolean value.  Each test has a -constraints value which is a list of
       constraint names.  There are two modes of constraint control.  Most
       frequently, the default mode is used, indicated by a setting of
       configure -limitconstraints to false.  The test will run only if all
       constraints in the list are true-valued.  Thus, the -constraints
       option of test is a convenient, symbolic way to define any conditions
       required for the test to be possible or meaningful.  For example, a
       test with -constraints unix will only be run if the constraint unix
       is true, which indicates the test suite is being run on a Unix
       platform.

       Each test should include whatever -constraints are required to
       constrain it to run only where appropriate.  Several constraints are
       pre-defined in the tcltest package, listed below.  The registration
       of user-defined constraints is performed by the testConstraint
       command.  User-defined constraints may appear within a test file, or
       within the script specified by the configure -load or configure
       -loadfile options.

       The following is a list of constraints pre-defined by the tcltest
       package itself:

       singleTestInterp
              This test can only be run if all test files are sourced into a
              single interpreter.

       unix   This test can only be run on a Unix platform.

       win    This test can only be run on a Windows platform.

       nt     This test can only be run on a Windows NT platform.

       mac    This test can only be run on a Mac platform.

       unixOrWin
              This test can only be run on a Unix or Windows platform.

       macOrWin
              This test can only be run on a Mac or Windows platform.

       macOrUnix
              This test can only be run on a Mac or Unix platform.

       tempNotWin
              This test cannot be run on Windows.  This flag is used to
              temporarily disable a test.

       tempNotMac
              This test cannot be run on a Mac.  This flag is used to
              temporarily disable a test.

       unixCrash
              This test crashes if it is run on Unix.  This flag is used to
              temporarily disable a test.

       winCrash
              This test crashes if it is run on Windows.  This flag is used
              to temporarily disable a test.

       macCrash
              This test crashes if it is run on a Mac.  This flag is used to
              temporarily disable a test.

       emptyTest
              This test is empty, and so not worth running, but it remains
              as a place-holder for a test to be written in the future.
              This constraint has value false to cause tests to be skipped
              unless the user specifies otherwise.

       knownBug
              This test is known to fail and the bug is not yet fixed.  This
              constraint has value false to cause tests to be skipped unless
              the user specifies otherwise.

       nonPortable
              This test can only be run in some known development
              environment.  Some tests are inherently non-portable because
              they depend on things like word length, file system
              configuration, window manager, etc.  This constraint has value
              false to cause tests to be skipped unless the user specifies
              otherwise.

       userInteraction
              This test requires interaction from the user.  This constraint
              has value false to cause tests to be skipped unless the user
              specifies otherwise.

       interactive
              This test can only be run if the interpreter is in interactive
              mode (when the global tcl_interactive variable is set to 1).

       nonBlockFiles
              This test can only be run if the platform supports setting
              files into nonblocking mode.

       asyncPipeClose
              This test can only be run if the platform supports async flush
              and async close on a pipe.

       unixExecs
              This test can only be run if the machine has the Unix-style
              commands cat, echo, sh, wc, rm, sleep, fgrep, ps, chmod, and
              mkdir available.

       hasIsoLocale
              This test can only be run if the platform can switch to an ISO
              locale.

       root   This test can only run if the Unix user is root.

       notRoot
              This test can only run if the Unix user is not root.

       eformat
              This test can only run if the application has a working
              version of sprintf with respect to the “e” format of
              floating-point numbers.

       stdio  This test can only be run if the interpreter can be opened as
              a pipe.

       The alternative mode of constraint control is enabled by setting
       configure -limitconstraints to true.  With that configuration
       setting, all existing constraints other than those in the constraint
       list returned by configure -constraints are set to false.  When the
       value of configure -constraints is set, all those constraints are set
       to true.  The effect is that when both options configure -constraints
       and configure -limitconstraints are in use, only those tests
       including only constraints from the configure -constraints list are
       run; all others are skipped.  For example, one might set up a
       configuration with

              configure -constraints knownBug \
                        -limitconstraints true \
                        -verbose pass

       to run exactly those tests that exercise known bugs, and discover
       whether any of them pass, indicating the bug had been fixed.

   RUNNING ALL TESTS
       The single command runAllTests is evaluated to run an entire test
       suite, spanning many files and directories.  The configuration
       options of tcltest control the precise operations.  The runAllTests
       command begins by printing a summary of its configuration to
       outputChannel.

       Test files to be evaluated are sought in the directory configure
       -testdir.  The list of files in that directory that match any of the
       patterns in configure -file and match none of the patterns in
       configure -notfile is generated and sorted.  Then each file will be
       evaluated in turn.  If configure -singleproc is true, then each file
       will be sourced in the caller's context.  If it is false, then a copy
       of interpreter will be exec'd to evaluate each file.  The
       multi-process operation is useful when testing can cause errors so
       severe that a process terminates.  Although such an error may
       terminate a child process evaluating one file, the master process can
       continue with the rest of the test suite.  In multi-process
       operation, the configuration of tcltest in the master process is
       passed to the child processes as command line arguments, with the
       exception of configure -outfile.  The runAllTests command in the
       master process collects all output from the child processes and
       collates their results into one master report.  Any reports of
       individual test failures, or messages requested by a configure
       -verbose setting, are passed directly on to outputChannel by the
       master process.

       After evaluating all selected test files, a summary of the results is
       printed to outputChannel.  The summary includes the total number of
       tests evaluated, broken down into those skipped, those passed, and
       those failed.  The summary also notes the number of files evaluated,
       and the names of any files with failing tests or errors.  A list of
       the constraints that caused tests to be skipped, and the number of
       tests skipped for each, is also printed.  Also, messages are printed
       if it appears that evaluation of a test file has caused any temporary
       files to be left behind in configure -tmpdir.

       Having completed and summarized all selected test files, runAllTests
       then recursively acts on subdirectories of configure -testdir.  All
       subdirectories that match any of the patterns in configure
       -relateddir and do not match any of the patterns in configure
       -asidefromdir are examined.  If a file named all.tcl is found in such
       a directory, it will be sourced in the caller's context.  Whether or
       not an examined directory contains an all.tcl file, its
       subdirectories are also scanned against the configure -relateddir and
       configure -asidefromdir patterns.  In this way, many directories in a
       directory tree can have all their test files evaluated by a single
       runAllTests command.


CONFIGURABLE OPTIONS

       The configure command is used to set and query the configurable
       options of tcltest.  The valid options are:

       -singleproc boolean
              Controls whether or not runAllTests spawns a child process for
              each test file.  No spawning when boolean is true.  Default
              value is false.

       -debug level
              Sets the debug level to level, an integer value indicating how
              much debugging information should be printed to stdout.  Note
              that debug messages always go to stdout, independent of the
              value of configure -outfile.  Default value is 0.  Levels are
              defined as:

              0   Do not display any debug information.

              1   Display information regarding whether a test is skipped
                  because it does not match any of the tests that were
                  specified by configure -match (userSpecifiedNonMatch) or
                  matches any of the tests specified by configure -skip
                  (userSpecifiedSkip).  Also print warnings about possible
                  lack of cleanup or balance in test files, and warnings
                  about any re-use of test names.

              2   Display the flag array parsed by the command line
                  processor, the contents of the global env array, and all
                  user-defined variables that exist in the current namespace
                  as they are used.

              3   Display information regarding what individual procs in the
                  test harness are doing.

       -verbose level
              Sets the type of output verbosity desired to level, a list of
              zero or more of the elements body, pass, skip, start, error,
              line, msec and usec.  Default value is “body error”.  Levels
              are defined as:

              body (b)
                     Display the body of failed tests

              pass (p)
                     Print output when a test passes

              skip (s)
                     Print output when a test is skipped

              start (t)
                     Print output whenever a test starts

              error (e)
                     Print errorInfo and errorCode, if they exist, when a
                     test return code does not match its expected return
                     code

              line (l)
                     Print source file line information of failed tests

              msec (m)
                     Print each test's execution time in milliseconds

              usec (u)
                     Print each test's execution time in microseconds

              Note that the msec and usec verbosity levels are provided as
              indicative measures only.  They do not tackle the problem of
              repeatability, which should be considered in performance tests
              or benchmarks.  To use these verbosity levels to thoroughly
              track performance degradations, consider wrapping your test
              bodies with time commands.

              The single letter abbreviations noted above are also
              recognized so that “configure -verbose pt” is the same as
              “configure -verbose {pass start}”.

       -preservecore level
              Sets the core preservation level to level.  This level
              determines how stringent checks for core files are.  Default
              value is 0.  Levels are defined as:

              0      No checking; do not check for core files at the end of
                     each test command, but do check for them in runAllTests
                     after all test files have been evaluated.

              1      Also check for core files at the end of each test
                     command.

              2      Check for core files at all times described above, and
                     save a copy of each core file produced in configure
                     -tmpdir.

       -limitconstraints boolean
              Sets the mode by which test honors constraints as described in
              TESTS above.  Default value is false.

       -constraints list
              Sets all the constraints in list to true.  Also used in
              combination with configure -limitconstraints true to control
              an alternative constraint mode as described in TESTS above.
              Default value is an empty list.

       -tmpdir directory
              Sets the temporary directory to be used by makeFile,
              makeDirectory, viewFile, removeFile, and removeDirectory as
              the default directory where temporary files and directories
              created by test files should be created.  Default value is
              workingDirectory.

       -testdir directory
              Sets the directory searched by runAllTests for test files and
              subdirectories.  Default value is workingDirectory.

       -file patternList
              Sets the list of patterns used by runAllTests to determine
              what test files to evaluate.  Default value is “*.test”.

       -notfile patternList
              Sets the list of patterns used by runAllTests to determine
              what test files to skip.  Default value is “l.*.test”, so that
              any SCCS lock files are skipped.

       -relateddir patternList
              Sets the list of patterns used by runAllTests to determine
              what subdirectories to search for an all.tcl file.  Default
              value is “*”.

       -asidefromdir patternList
              Sets the list of patterns used by runAllTests to determine
              what subdirectories to skip when searching for an all.tcl
              file.  Default value is an empty list.

       -match patternList
              Set the list of patterns used by test to determine whether a
              test should be run.  Default value is “*”.

       -skip patternList
              Set the list of patterns used by test to determine whether a
              test should be skipped.  Default value is an empty list.

       -load script
              Sets a script to be evaluated by loadTestedCommands.  Default
              value is an empty script.

       -loadfile filename
              Sets the filename from which to read a script to be evaluated
              by loadTestedCommands.  This is an alternative to -load.  They
              cannot be used together.

       -outfile filename
              Sets the file to which all output produced by tcltest should
              be written.  A file named filename will be opened for writing,
              and the resulting channel will be set as the value of
              outputChannel.

       -errfile filename
              Sets the file to which all error output produced by tcltest
              should be written.  A file named filename will be opened for
              writing, and the resulting channel will be set as the value of
              errorChannel.

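       As an illustration, a typical configuration might combine several of
       these options; this is a sketch, and the paths and patterns shown are
       arbitrary:

              ::tcltest::configure -testdir /path/to/tests \
                      -tmpdir /tmp/tcltest-scratch \
                      -file {io-*.test} \
                      -notfile {io-slow.test} \
                      -verbose {body error skip}
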

CREATING TEST SUITES WITH TCLTEST

       The fundamental element of a test suite is the individual test
       command.  We begin with several examples.

       [1]    Test of a script that returns normally.

                     test example-1.0 {normal return} {
                         format %s value
                     } value

       [2]    Test of a script that requires context setup and cleanup.
              Note the bracing and indenting style that avoids any need for
              line continuation.

                     test example-1.1 {test file existence} -setup {
                         set file [makeFile {} test]
                     } -body {
                         file exists $file
                     } -cleanup {
                         removeFile test
                     } -result 1

       [3]    Test of a script that raises an error.

                     test example-1.2 {error return} -body {
                         error message
                     } -returnCodes error -result message

       [4]    Test with a constraint.

                     test example-1.3 {user owns created files} -constraints {
                         unix
                     } -setup {
                         set file [makeFile {} test]
                     } -body {
                         file attributes $file -owner
                     } -cleanup {
                         removeFile test
                     } -result $::tcl_platform(user)

       At the next higher layer of organization, several test commands are
       gathered together into a single test file.  Test files should have
       names with the “.test” extension, because that is the default pattern
       used by runAllTests to find test files.  It is a good rule of thumb
       to have one test file for each source code file of your project.  It
       is good practice to edit the test file and the source code file
       together, keeping tests synchronized with code changes.

       Most of the code in the test file should be the test commands.  Use
       constraints to skip tests, rather than conditional evaluation of
       test.

       [5]    Recommended system for writing conditional tests, using
              constraints to guard:

                     testConstraint X [expr $myRequirement]
                     test goodConditionalTest {} X {
                         # body
                     } result

       [6]    Discouraged system for writing conditional tests, using if to
              guard:

                     if $myRequirement {
                         test badConditionalTest {} {
                             #body
                         } result
                     }

       Use the -setup and -cleanup options to establish and release all
       context requirements of the test body.  Do not make tests depend on
       prior tests in the file.  Those prior tests might be skipped.  If
       several consecutive tests require the same context, the appropriate
       setup and cleanup scripts may be stored in a variable for passing to
       each test's -setup and -cleanup options.  This is a better solution
       than performing setup outside of test commands, because the setup
       will only be done if necessary, and any errors during setup will be
       reported, and not cause the test file to abort.

       A test file should be able to be combined with other test files and
       not interfere with them, even when configure -singleproc 1 causes all
       files to be evaluated in a common interpreter.  A simple way to
       achieve this is to have your tests define all their commands and
       variables in a namespace that is deleted when the test file
       evaluation is complete.  A good namespace to use is a child namespace
       test of the namespace of the module you are testing.

       A test file should also be able to be evaluated directly as a script,
       not depending on being called by a master runAllTests.  This means
       that each test file should process command line arguments to give the
       tester all the configuration control that tcltest provides.

       After all tests in a test file, the command cleanupTests should be
       called.

       [7]    Here is a sketch of a sample test file illustrating those
              points:

                     package require tcltest 2.2
                     eval ::tcltest::configure $argv
                     package require example
                     namespace eval ::example::test {
                         namespace import ::tcltest::*
                         testConstraint X [expr {...}]
                         variable SETUP {#common setup code}
                         variable CLEANUP {#common cleanup code}
                         test example-1 {} -setup $SETUP -body {
                             # First test
                         } -cleanup $CLEANUP -result {...}
                         test example-2 {} -constraints X -setup $SETUP -body {
                             # Second test; constrained
                         } -cleanup $CLEANUP -result {...}
                         test example-3 {} {
                             # Third test; no context required
                         } {...}
                         cleanupTests
                     }
                     namespace delete ::example::test

       The next level of organization is a full test suite, made up of
       several test files.  One script is used to control the entire suite.
       The basic function of this script is to call runAllTests after doing
       any necessary setup.  This script is usually named all.tcl because
       that is the default name used by runAllTests when combining multiple
       test suites into one testing run.

       [8]    Here is a sketch of a sample test suite master script:

                     package require Tcl 8.4
                     package require tcltest 2.2
                     package require example
                     ::tcltest::configure -testdir \
                             [file dirname [file normalize [info script]]]
                     eval ::tcltest::configure $argv
                     ::tcltest::runAllTests


COMPATIBILITY

       A number of commands and variables in the ::tcltest namespace
       provided by earlier releases of tcltest have not been documented
       here.  They are no longer part of the supported public interface of
       tcltest and should not be used in new test suites.  However, to
       continue to support existing test suites written to the older
       interface specifications, many of those deprecated commands and
       variables still work as before.  For example, in many circumstances,
       configure will be automatically called shortly after package require
       tcltest 2.1 succeeds with arguments from the variable ::argv.  This
       is to support test suites that depend on the old behavior that
       tcltest was automatically configured from command line arguments.
       New test files should not depend on this, but should explicitly
       include

              eval ::tcltest::configure $::argv

       or

              ::tcltest::configure {*}$::argv

       to establish a configuration from command line arguments.


KNOWN ISSUES

       There are two known issues related to nested evaluations of test.
       The first issue relates to the stack level in which test scripts are
       executed.  Tests nested within other tests may be executed at the
       same stack level as the outermost test.  For example, in the
       following code:

              test level-1.1 {level 1} {
                  -body {
                      test level-2.1 {level 2} {
                      }
                  }
              }

       any script executed in level-2.1 may be executed at the same stack
       level as the script defined for level-1.1.

       In addition, while two tests have been run, results will only be
       reported by cleanupTests for tests at the same level as test
       level-1.1.  However, test results for all tests run prior to
       level-1.1 will be available when test level-2.1 runs.  What this
       means is that if you try to access the test results for test
       level-2.1, it may say that “m” tests have run, “n” tests have been
       skipped, “o” tests have passed and “p” tests have failed, where “m”,
       “n”, “o”, and “p” refer to tests that were run at the same test level
       as test level-1.1.

       Implementation of output and error comparison in the test command
       depends on usage of puts in your application code.  Output is
       intercepted by redefining the global puts command while the defined
       test script is being run.  Errors thrown by C procedures or printed
       directly from C applications will not be caught by the test command.
       Therefore, usage of the -output and -errorOutput options to test is
       useful only for pure Tcl applications that use puts to produce
       output.


KEYWORDS

       test, test harness, test suite



tcltest                               2.5                           tcltest(n)