tcltest(n)                   Tcl Bundled Packages                   tcltest(n)




______________________________________________________________________________


NAME

       tcltest - Test harness support code and utilities


SYNOPSIS

       package require tcltest ?2.3?

       tcltest::test name description ?-option value ...?
       tcltest::test name description ?constraints? body result

       tcltest::loadTestedCommands
       tcltest::makeDirectory name ?directory?
       tcltest::removeDirectory name ?directory?
       tcltest::makeFile contents name ?directory?
       tcltest::removeFile name ?directory?
       tcltest::viewFile name ?directory?
       tcltest::cleanupTests ?runningMultipleTests?
       tcltest::runAllTests

       tcltest::configure
       tcltest::configure -option
       tcltest::configure -option value ?-option value ...?
       tcltest::customMatch mode command
       tcltest::testConstraint constraint ?value?
       tcltest::outputChannel ?channelID?
       tcltest::errorChannel ?channelID?
       tcltest::interpreter ?interp?

       tcltest::debug ?level?
       tcltest::errorFile ?filename?
       tcltest::limitConstraints ?boolean?
       tcltest::loadFile ?filename?
       tcltest::loadScript ?script?
       tcltest::match ?patternList?
       tcltest::matchDirectories ?patternList?
       tcltest::matchFiles ?patternList?
       tcltest::outputFile ?filename?
       tcltest::preserveCore ?level?
       tcltest::singleProcess ?boolean?
       tcltest::skip ?patternList?
       tcltest::skipDirectories ?patternList?
       tcltest::skipFiles ?patternList?
       tcltest::temporaryDirectory ?directory?
       tcltest::testsDirectory ?directory?
       tcltest::verbose ?level?

       tcltest::test name description optionList
       tcltest::bytestring string
       tcltest::normalizeMsg msg
       tcltest::normalizePath pathVar
       tcltest::workingDirectory ?dir?
______________________________________________________________________________


DESCRIPTION

60       The  tcltest  package  provides  several utility commands useful in the
61       construction of test suites for code instrumented to be run by  evalua‐
62       tion of Tcl commands.  Notably the built-in commands of the Tcl library
63       itself are tested by a test suite using the tcltest package.
64
65       All the commands provided by the tcltest package  are  defined  in  and
66       exported  from  the  ::tcltest  namespace, as indicated in the SYNOPSIS
67       above.  In the following sections, all commands will  be  described  by
68       their simple names, in the interest of brevity.
69
70       The  central  command  of tcltest is test that defines and runs a test.
71       Testing with test involves evaluation of a Tcl script and comparing the
72       result  to an expected result, as configured and controlled by a number
73       of options.  Several other commands provided by tcltest govern the con‐
74       figuration  of  test and the collection of many test commands into test
75       suites.
76
77       See CREATING TEST SUITES WITH TCLTEST below for an extended example  of
78       how to use the commands of tcltest to produce test suites for your Tcl-
79       enabled code.
80

COMMANDS

82       test name description ?-option value ...?
83              Defines and possibly runs a test with the name name and descrip‐
84              tion  description.   The name and description of a test are used
85              in messages reported by test during the test, as  configured  by
86              the options of tcltest.  The remaining option value arguments to
87              test define the test, including the scripts to run,  the  condi‐
88              tions  under  which  to  run  them, the expected result, and the
89              means by which the expected and actual results  should  be  com‐
90              pared.   See TESTS below for a complete description of the valid
91              options and how they define a test.  The test command returns an
92              empty string.
93
94       test name description ?constraints? body result
95              This form of test is provided to support test suites written for
96              version 1 of the tcltest package, and also a  simpler  interface
97              for  a  common  usage.  It is the same as “test name description
98              -constraints constraints -body body -result result”.  All  other
99              options  to test take their default values.  When constraints is
100              omitted, this form of test can be distinguished from  the  first
101              because all options begin with “-”.
102
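              For example, the same check can be written in either form (a
              minimal illustration; the test names and values are arbitrary):

                     test example-0.1 {old-style form} {string length abc} 3

                     test example-0.2 {option-value form} -body {
                         string length abc
                     } -result 3
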
103       loadTestedCommands
104              Evaluates  in  the caller's context the script specified by con‐
105              figure -load or configure -loadfile.  Returns the result of that
106              script  evaluation,  including  any  error raised by the script.
107              Use this command and the related configuration options  to  pro‐
108              vide  the  commands  to be tested to the interpreter running the
109              test suite.
110
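              For example, a test suite might arrange for the package under
              test to be loaded as follows (a sketch; the package name
              example is hypothetical):

                     configure -load {package require example}
                     loadTestedCommands
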
111       makeFile contents name ?directory?
              Creates a file named name relative to directory directory and
              writes contents to that file using the system encoding.
114              If contents does not end with  a  newline,  a  newline  will  be
115              appended  so  that  the file named name does end with a newline.
116              Because the system encoding is used, this command is only  suit‐
117              able  for  making  text  files.  The file will be removed by the
118              next evaluation of cleanupTests, unless it is removed by remove‐
119              File  first.   The  default  value of directory is the directory
120              configure -tmpdir.  Returns the full path of the  file  created.
121              Use this command to create any text file required by a test with
122              contents as needed.
123
124       removeFile name ?directory?
125              Forces the file referenced by name to  be  removed.   This  file
126              name  should  be  relative  to directory.   The default value of
127              directory is the directory configure -tmpdir.  Returns an  empty
128              string.  Use this command to delete files created by makeFile.
129
130       makeDirectory name ?directory?
131              Creates  a directory named name relative to directory directory.
132              The  directory  will  be  removed  by  the  next  evaluation  of
133              cleanupTests,  unless  it  is  removed by removeDirectory first.
134              The default  value  of  directory  is  the  directory  configure
135              -tmpdir.   Returns  the full path of the directory created.  Use
136              this command to create any  directories  that  are  required  to
137              exist by a test.
138
139       removeDirectory name ?directory?
140              Forces  the  directory  referenced  by  name to be removed. This
141              directory should be relative to directory.  The default value of
142              directory  is the directory configure -tmpdir.  Returns an empty
143              string.  Use this command to delete any directories  created  by
144              makeDirectory.
145
146       viewFile file ?directory?
147              Returns the contents of file, except for any final newline, just
148              as read -nonewline would return.  This file name should be rela‐
149              tive to directory.  The default value of directory is the direc‐
150              tory configure -tmpdir.  Use this command as a convenient way to
151              turn  the contents of a file generated by a test into the result
152              of that test for matching against an expected result.  The  con‐
153              tents  of  the  file  are read using the system encoding, so its
154              usefulness is limited to text files.
155
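              For example, makeFile, viewFile, and removeFile combine
              naturally in a single test (a sketch; the file name out.txt
              and the expected contents are arbitrary):

                     test example-0.3 {write a file, then read it back} -setup {
                         set f [makeFile {} out.txt]
                     } -body {
                         set chan [open $f w]
                         puts $chan "hello"
                         close $chan
                         viewFile out.txt
                     } -cleanup {
                         removeFile out.txt
                     } -result hello
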
156       cleanupTests
157              Intended to clean up and summarize after several tests have been
158              run.   Typically  called  once  per test file, at the end of the
159              file after all tests have been completed.  For  best  effective‐
              ness, be sure that cleanupTests is evaluated even if an
161              error occurs earlier in the test file evaluation.
162
163              Prints statistics about the tests run  and  removes  files  that
164              were  created  by  makeDirectory  and  makeFile  since  the last
165              cleanupTests.  Names of files and directories in  the  directory
166              configure  -tmpdir  created since the last cleanupTests, but not
167              created by makeFile or makeDirectory are printed to  outputChan‐
168              nel.  This command also restores the original shell environment,
169              as described by the global env array. Returns an empty string.
170
171       runAllTests
172              This is a master command meant to run an entire suite of  tests,
173              spanning  multiple  files and/or directories, as governed by the
174              configurable options of tcltest.  See RUNNING  ALL  TESTS  below
175              for  a complete description of the many variations possible with
176              runAllTests.
177
178   CONFIGURATION COMMANDS
179       configure
180              Returns the list of configurable options supported  by  tcltest.
181              See  CONFIGURABLE  OPTIONS  below  for the full list of options,
182              their valid values, and their effect on tcltest operations.
183
184       configure option
185              Returns the current value of the supported  configurable  option
186              option.   Raises  an  error if option is not a supported config‐
187              urable option.
188
189       configure option value ?-option value ...?
190              Sets the value of each configurable option option to the  corre‐
191              sponding value value, in order.  Raises an error if an option is
192              not a supported configurable option, or if value is not a  valid
193              value  for  the  corresponding option, or if a value is not pro‐
194              vided.  When an error is raised, the operation of  configure  is
195              halted, and subsequent option value arguments are not processed.
196
197              If  the  environment variable ::env(TCLTEST_OPTIONS) exists when
198              the tcltest package is loaded (by package require tcltest)  then
199              its  value is taken as a list of arguments to pass to configure.
200              This allows the default values of the configuration  options  to
201              be set by the environment.
202
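              For example, defaults can be preset before the package is
              loaded (a minimal sketch; the option values are arbitrary):

                     set ::env(TCLTEST_OPTIONS) {-verbose {pass error} -debug 1}
                     package require tcltest
                     # tcltest now starts with those options already applied
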
203       customMatch mode script
204              Registers  mode  as  a  new  legal value of the -match option to
205              test.  When the -match mode option is passed to test, the script
206              script  will be evaluated to compare the actual result of evalu‐
207              ating the body of the test to the expected result.   To  perform
208              the  match,  the  script is completed with two additional words,
209              the expected result, and the actual result,  and  the  completed
210              script  is  evaluated  in  the  global namespace.  The completed
211              script is expected to return a boolean value indicating  whether
212              or  not  the results match.  The built-in matching modes of test
213              are exact, glob, and regexp.
214
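              For example, a match mode that compares list elements with
              string match could be registered as follows (a sketch; the
              procedure name globListMatch and the mode name listGlob are
              illustrative, not part of tcltest):

                     proc globListMatch {expected actual} {
                         # Succeed only when each element of the actual list
                         # glob-matches the corresponding expected element.
                         if {[llength $expected] != [llength $actual]} {
                             return 0
                         }
                         foreach e $expected a $actual {
                             if {![string match $e $a]} {
                                 return 0
                             }
                         }
                         return 1
                     }
                     customMatch listGlob globListMatch

                     test example-0.4 {compare element-wise} -body {
                         list [string repeat a 3] done
                     } -match listGlob -result {a* done}
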
215       testConstraint constraint ?boolean?
216              Sets or returns the boolean value associated with the named con‐
217              straint.  See TEST CONSTRAINTS below for more information.
218
219       interpreter ?executableName?
              Sets or returns the name of the executable to be exec'd by
221              runAllTests to run each test file when configure -singleproc  is
222              false.   The  default  value  for interpreter is the name of the
223              currently running program as returned by info nameofexecutable.
224
225       outputChannel ?channelID?
226              Sets or returns the output channel ID.  This defaults to stdout.
227              Any test that prints test related output should send that output
228              to outputChannel rather than letting that output default to std‐
229              out.
230
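              For example, a test file that wants to emit a progress note
              might write (a minimal sketch):

                     puts [outputChannel] "example tests: starting section 2"
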
231       errorChannel ?channelID?
232              Sets  or returns the error channel ID.  This defaults to stderr.
233              Any test that prints error messages should send that  output  to
234              errorChannel rather than printing directly to stderr.
235
236   SHORTCUT CONFIGURATION COMMANDS
237       debug ?level?
238              Same as “configure -debug ?level?”.
239
240       errorFile ?filename?
241              Same as “configure -errfile ?filename?”.
242
243       limitConstraints ?boolean?
244              Same as “configure -limitconstraints ?boolean?”.
245
246       loadFile ?filename?
247              Same as “configure -loadfile ?filename?”.
248
249       loadScript ?script?
250              Same as “configure -load ?script?”.
251
252       match ?patternList?
253              Same as “configure -match ?patternList?”.
254
255       matchDirectories ?patternList?
256              Same as “configure -relateddir ?patternList?”.
257
258       matchFiles ?patternList?
259              Same as “configure -file ?patternList?”.
260
261       outputFile ?filename?
262              Same as “configure -outfile ?filename?”.
263
264       preserveCore ?level?
265              Same as “configure -preservecore ?level?”.
266
267       singleProcess ?boolean?
268              Same as “configure -singleproc ?boolean?”.
269
270       skip ?patternList?
271              Same as “configure -skip ?patternList?”.
272
273       skipDirectories ?patternList?
274              Same as “configure -asidefromdir ?patternList?”.
275
276       skipFiles ?patternList?
277              Same as “configure -notfile ?patternList?”.
278
279       temporaryDirectory ?directory?
280              Same as “configure -tmpdir ?directory?”.
281
282       testsDirectory ?directory?
283              Same as “configure -testdir ?directory?”.
284
285       verbose ?level?
286              Same as “configure -verbose ?level?”.
287
288   OTHER COMMANDS
289       The  remaining  commands  provided  by tcltest have better alternatives
290       provided by tcltest or Tcl itself.  They are retained to support exist‐
291       ing test suites, but should be avoided in new code.
292
293       test name description optionList
294              This  form  of  test was provided to enable passing many options
295              spanning several lines to test as a single  argument  quoted  by
296              braces,  rather  than  needing  to  backslash quote the newlines
297              between arguments to test.  The optionList argument is  expected
298              to be a list with an even number of elements representing option
299              and value arguments to pass to test.  However, these values  are
300              not  passed  directly,  as  in  the  alternate  forms of switch.
301              Instead, this form makes an  unfortunate  attempt  to  overthrow
302              Tcl's  substitution rules by performing substitutions on some of
303              the list elements as an attempt to implement a “do what I  mean”
304              interpretation  of  a  brace-enclosed  “block”.   The  result is
305              nearly impossible to document clearly, and for that reason  this
306              form  is  not  recommended.   See  the examples in CREATING TEST
307              SUITES WITH TCLTEST below to see that this form  is  really  not
308              necessary  to avoid backslash-quoted newlines.  If you insist on
309              using this form, examine the source code of tcltest if you  want
310              to  know  the  substitution  details,  or just enclose the third
311              through last argument to test in braces and hope for the best.
312
313       workingDirectory ?directoryName?
314              Sets or returns the current  working  directory  when  the  test
315              suite is running.  The default value for workingDirectory is the
316              directory in which the test suite was launched.   The  Tcl  com‐
317              mands cd and pwd are sufficient replacements.
318
319       normalizeMsg msg
320              Returns  the  result  of removing the “extra” newlines from msg,
321              where “extra” is rather imprecise.  Tcl offers plenty of  string
322              processing  commands  to modify strings as you wish, and custom‐
323              Match allows flexible matching of actual and expected results.
324
325       normalizePath pathVar
326              Resolves symlinks in a path, thus creating a path without inter‐
327              nal redirection.  It is assumed that pathVar is absolute.  path‐
328              Var is modified in place.  The Tcl command file normalize  is  a
329              sufficient replacement.
330
331       bytestring string
332              Construct  a  string  that consists of the requested sequence of
333              bytes, as opposed to a string of properly formed  UTF-8  charac‐
334              ters using the value supplied in string.  This allows the tester
335              to create denormalized or improperly formed strings to pass to C
              procedures that are supposed to accept strings with embedded
              NULL bytes and confirm that a string result has a certain pat‐
338              tern  of  bytes.   This is exactly equivalent to the Tcl command
339              encoding convertfrom identity.
340

TESTS

342       The test command is the heart of the tcltest  package.   Its  essential
343       function  is  to  evaluate  a Tcl script and compare the result with an
344       expected result.  The options of test define the test script, the envi‐
       ronment in which to evaluate it, the expected result, and how to com‐
       pare the actual result to the expected result.  Some configuration
347       options of tcltest also influence how test operates.
348
349       The valid options for test are summarized:
350
351              test name description
352                      ?-constraints keywordList|expression?
353                      ?-setup setupScript?
354                      ?-body testScript?
355                      ?-cleanup cleanupScript?
356                      ?-result expectedAnswer?
357                      ?-output expectedOutput?
358                      ?-errorOutput expectedError?
359                      ?-returnCodes codeList?
360                      ?-match mode?
361
362       The  name  may  be  any  string.   It  is conventional to choose a name
363       according to the pattern:
364
365              target-majorNum.minorNum
366
367       For white-box (regression) tests, the target should be the name of  the
368       C  function  or  Tcl  procedure being tested.  For black-box tests, the
369       target should be the name of the feature being  tested.   Some  conven‐
370       tions  call  for  the  names of black-box tests to have the suffix _bb.
371       Related tests should share a major number.  As a test suite evolves, it
372       is  best  to have the same test name continue to correspond to the same
373       test, so that it remains meaningful to say things  like  “Test  foo-1.3
374       passed in all releases up to 3.4, but began failing in release 3.5.”
375
376       During  evaluation  of  test, the name will be compared to the lists of
377       string matching patterns returned by configure  -match,  and  configure
378       -skip.   The  test will be run only if name matches any of the patterns
379       from configure -match and matches none of the patterns  from  configure
380       -skip.
381
382       The description should be a short textual description of the test.  The
383       description is included in output produced by the test, typically  test
384       failure  messages.   Good description values should briefly explain the
385       purpose of the test to users of a test suite.  The name of a Tcl  or  C
386       function being tested should be included in the description for regres‐
387       sion tests.  If the test case exists to reproduce a  bug,  include  the
388       bug ID in the description.
389
390       Valid attributes and associated values are:
391
392       -constraints keywordList|expression
              The optional -constraints attribute can be a list of one or more
394              keywords or an expression.  If the -constraints value is a  list
395              of keywords, each of these keywords should be the name of a con‐
396              straint defined by a call to  testConstraint.   If  any  of  the
397              listed  constraints  is  false  or  does  not exist, the test is
398              skipped.  If the  -constraints  value  is  an  expression,  that
399              expression  is  evaluated.  If the expression evaluates to true,
400              then the test is run.  Note that the expression  form  of  -con‐
401              straints  may  interfere  with  the operation of configure -con‐
402              straints and configure  -limitconstraints,  and  is  not  recom‐
403              mended.   Appropriate  constraints  should be added to any tests
404              that should not always be run.  That is, conditional  evaluation
405              of a test should be accomplished by the -constraints option, not
406              by conditional evaluation of test.  In that way, the same number
407              of  tests are always reported by the test suite, though the num‐
408              ber skipped may change based on the  testing  environment.   The
409              default  value is an empty list.  See TEST CONSTRAINTS below for
410              a list of built-in constraints and information  on  how  to  add
411              your own constraints.
412
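              For example, a test that only makes sense for an unprivileged
              user on Unix can name both built-in constraints (a sketch; the
              test name and body are arbitrary):

                     test example-0.5 {/ is not writable} -constraints {
                         unix notRoot
                     } -body {
                         file writable /
                     } -result 0
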
413       -setup script
414              The  optional  -setup  attribute indicates a script that will be
415              run before the script indicated  by  the  -body  attribute.   If
416              evaluation  of  script raises an error, the test will fail.  The
417              default value is an empty script.
418
419       -body script
420              The -body attribute indicates the script to run to carry out the
421              test,  which  must  return a result that can be checked for cor‐
422              rectness.  If evaluation of script raises  an  error,  the  test
423              will  fail (unless the -returnCodes option is used to state that
424              an error is expected).  The default value is an empty script.
425
426       -cleanup script
427              The optional -cleanup attribute indicates a script that will  be
428              run after the script indicated by the -body attribute.  If eval‐
429              uation of script raises an  error,  the  test  will  fail.   The
430              default value is an empty script.
431
432       -match mode
433              The -match attribute determines how expected answers supplied by
434              -result, -output, and -errorOutput are compared.   Valid  values
435              for  mode are regexp, glob, exact, and any value registered by a
436              prior call to customMatch.  The default value is exact.
437
438       -result expectedValue
439              The -result attribute supplies the expectedValue  against  which
440              the return value from script will be compared. The default value
441              is an empty string.
442
443       -output expectedValue
444              The -output attribute supplies the expectedValue  against  which
445              any  output sent to stdout or outputChannel during evaluation of
446              the script(s) will be compared.  Note that only  output  printed
447              using  the global puts command is used for comparison.  If -out‐
448              put is not specified, output sent to stdout and outputChannel is
449              not processed for comparison.
450
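              For example, output written with the Tcl-level puts can be
              checked directly (a minimal sketch; note the trailing newline
              added by puts):

                     test example-0.6 {body writes to stdout} -body {
                         puts "hello, world"
                     } -output "hello, world\n"
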
451       -errorOutput expectedValue
452              The  -errorOutput  attribute  supplies the expectedValue against
453              which any output sent to stderr or errorChannel  during  evalua‐
454              tion  of  the  script(s) will be compared. Note that only output
455              printed using the global puts command is  used  for  comparison.
456              If  -errorOutput  is  not  specified,  output sent to stderr and
457              errorChannel is not processed for comparison.
458
459       -returnCodes expectedCodeList
460              The optional -returnCodes attribute supplies expectedCodeList, a
461              list of return codes that may be accepted from evaluation of the
462              -body script.  If evaluation of the -body script returns a  code
              not in the expectedCodeList, the test fails.  All return codes
              known to the return command, in both numeric and symbolic form,
              including
465              extended  return codes, are acceptable elements in the expected‐
466              CodeList.  Default value is “ok return”.
467
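              For example, symbolic codes may be listed just like numeric
              ones (a minimal sketch):

                     test example-0.7 {body ends with return} -body {
                         return done
                     } -returnCodes {ok return} -result done
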
468       To pass, a test must  successfully  evaluate  its  -setup,  -body,  and
469       -cleanup  scripts.   The return code of the -body script and its result
470       must match expected values, and if specified,  output  and  error  data
471       from  the test must match expected -output and -errorOutput values.  If
472       any of these conditions are not met, then the test  fails.   Note  that
473       all scripts are evaluated in the context of the caller of test.
474
475       As  long  as  test is called with valid syntax and legal values for all
476       attributes, it will not raise an  error.   Test  failures  are  instead
477       reported  as  output written to outputChannel.  In default operation, a
478       successful test produces no output.  The output  messages  produced  by
479       test  are  controlled  by the configure -verbose option as described in
480       CONFIGURABLE OPTIONS below.  Any output produced by  the  test  scripts
481       themselves should be produced using puts to outputChannel or errorChan‐
482       nel, so that users of the test suite may easily capture output with the
483       configure  -outfile  and  configure  -errfile  options, and so that the
484       -output and -errorOutput attributes work properly.
485
486   TEST CONSTRAINTS
487       Constraints are used to determine whether  or  not  a  test  should  be
488       skipped.   Each  constraint  has a name, which may be any string, and a
489       boolean value.  Each test has a -constraints value which is a  list  of
490       constraint  names.   There  are  two modes of constraint control.  Most
491       frequently, the default mode is used, indicated by a setting of config‐
492       ure  -limitconstraints  to  false.   The test will run only if all con‐
493       straints in the list are true-valued.  Thus, the -constraints option of
494       test  is  a  convenient, symbolic way to define any conditions required
495       for the test to be possible or meaningful.  For example,  a  test  with
496       -constraints  unix  will  only  be  run if the constraint unix is true,
497       which indicates the test suite is being run on a Unix platform.
498
499       Each test should include whatever -constraints  are  required  to  con‐
500       strain  it to run only where appropriate.  Several constraints are pre-
501       defined in the tcltest package,  listed  below.   The  registration  of
502       user-defined  constraints  is  performed by the testConstraint command.
503       User-defined constraints may appear within a test file, or  within  the
504       script specified by the configure -load or configure -loadfile options.
505
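       For example, a test file might define a constraint that is true only
       when an optional package can be loaded (a sketch; the http package is
       just an illustration):

              testConstraint httpPkg [expr {![catch {package require http}]}]

              test example-0.8 {needs the http package} -constraints {
                  httpPkg
              } -body {
                  llength [info commands ::http::geturl]
              } -result 1
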
506       The following is a list of constraints pre-defined by the tcltest pack‐
507       age itself:
508
509       singleTestInterp
510              This test can only be run if all test files are sourced  into  a
511              single interpreter.
512
       unix   This test can only be run on a Unix platform.

       win    This test can only be run on a Windows platform.

       nt     This test can only be run on a Windows NT platform.

       mac    This test can only be run on a Mac platform.
520
521       unixOrWin
522              This test can only be run on a Unix or Windows platform.
523
524       macOrWin
525              This test can only be run on a Mac or Windows platform.
526
527       macOrUnix
528              This test can only be run on a Mac or Unix platform.
529
530       tempNotWin
              This test cannot be run on Windows.  This flag is used to tem‐
532              porarily disable a test.
533
534       tempNotMac
              This test cannot be run on a Mac.  This flag is used to tempo‐
536              rarily disable a test.
537
538       unixCrash
539              This  test  crashes  if it is run on Unix.  This flag is used to
540              temporarily disable a test.
541
542       winCrash
543              This test crashes if it is run on Windows.  This flag is used to
544              temporarily disable a test.
545
546       macCrash
547              This  test  crashes if it is run on a Mac.  This flag is used to
548              temporarily disable a test.
549
550       emptyTest
551              This test is empty, and so not worth running, but it remains  as
552              a  place-holder  for  a  test to be written in the future.  This
553              constraint has value false to cause tests to be  skipped  unless
554              the user specifies otherwise.
555
556       knownBug
557              This  test  is known to fail and the bug is not yet fixed.  This
558              constraint has value false to cause tests to be  skipped  unless
559              the user specifies otherwise.
560
561       nonPortable
562              This test can only be run in some known development environment.
563              Some tests are inherently non-portable because  they  depend  on
564              things  like word length, file system configuration, window man‐
565              ager, etc.  This constraint has value false to cause tests to be
566              skipped unless the user specifies otherwise.
567
       userInteraction
              This test requires interaction from the user.  This constraint
              has value false to cause tests to be skipped unless the user
              specifies otherwise.

       interactive
              This test can only be run if the interpreter is in interactive
              mode (when the global tcl_interactive variable is set to 1).
577
       nonBlockFiles
              This test can only be run if the platform supports setting
              files into nonblocking mode.

       asyncPipeClose
              This test can only be run if the platform supports async flush
              and async close on a pipe.

       unixExecs
              This test can only be run if this machine has Unix-style com‐
              mands cat, echo, sh, wc, rm, sleep, fgrep, ps, chmod, and mkdir
              available.

       hasIsoLocale
              This test can only be run if the tester can switch to an ISO
              locale.

       root   This test can only run if the Unix user is root.

       notRoot
              This test can only run if the Unix user is not root.

       eformat
              This test can only run if the application has a working version
              of sprintf with respect to the “e” format of floating-point
              numbers.

       stdio  This test can only be run if the interpreter can be opened as
              a pipe.
605
606       The  alternative  mode of constraint control is enabled by setting con‐
607       figure -limitconstraints to true.  With that configuration setting, all
608       existing  constraints  other than those in the constraint list returned
609       by configure -constraints are set to false.  When the value of  config‐
610       ure  -constraints  is  set, all those constraints are set to true.  The
611       effect is that when both options configure -constraints  and  configure
612       -limitconstraints  are  in  use,  only  those tests including only con‐
613       straints from the configure -constraints list are run; all  others  are
614       skipped.  For example, one might set up a configuration with
615
616              configure -constraints knownBug \
617                        -limitconstraints true \
618                        -verbose pass
619
620       to  run  exactly  those  tests  that  exercise known bugs, and discover
621       whether any of them pass, indicating the bug had been fixed.
622
623   RUNNING ALL TESTS
624       The single command runAllTests is  evaluated  to  run  an  entire  test
625       suite,  spanning many files and directories.  The configuration options
626       of tcltest control the precise  operations.   The  runAllTests  command
627       begins by printing a summary of its configuration to outputChannel.
628
629       Test files to be evaluated are sought in the directory configure -test‐
630       dir.  The list of files in that directory that match any  of  the  pat‐
631       terns  in  configure  -file and match none of the patterns in configure
632       -notfile is generated and sorted.  Then each file will be evaluated  in
633       turn.  If configure -singleproc is true, then each file will be sourced
634       in the caller's context.  If it is false, then a  copy  of  interpreter
635       will  be  exec'd to evaluate each file.  The multi-process operation is
636       useful when testing can cause errors so severe that  a  process  termi‐
637       nates.  Although such an error may terminate a child process evaluating
638       one file, the master process can continue with the  rest  of  the  test
639       suite.  In multi-process operation, the configuration of tcltest in the
640       master process is passed to the child processes as command  line  argu‐
641       ments,  with the exception of configure -outfile.  The runAllTests com‐
642       mand in the master process collects all output from the child processes
643       and  collates  their  results  into  one master report.  Any reports of
644       individual test failures, or messages requested by a configure -verbose
645       setting are passed directly on to outputChannel by the master process.
646
647       After  evaluating  all selected test files, a summary of the results is
648       printed to outputChannel.  The summary includes  the  total  number  of
649       tests  evaluated,  broken  down  into  those skipped, those passed, and
650       those failed.  The summary also notes the number  of  files  evaluated,
651       and the names of any files with failing tests or errors.  A list of the
652       constraints that caused tests to be skipped, and the  number  of  tests
653       skipped  for  each  is  also printed.  Also, messages are printed if it
654       appears that evaluation of a test file has caused any  temporary  files
655       to be left behind in configure -tmpdir.
656
657       Having  completed  and  summarized all selected test files, runAllTests
658       then recursively acts on subdirectories  of  configure  -testdir.   All
659       subdirectories  that match any of the patterns in configure -relateddir
660       and do not match any of the patterns  in  configure  -asidefromdir  are
661       examined.   If  a  file  named all.tcl is found in such a directory, it
662       will be sourced in the caller's context.  Whether or  not  an  examined
663       directory contains an all.tcl file, its subdirectories are also scanned
664       against the configure -relateddir and configure -asidefromdir patterns.
665       In  this  way,  many directories in a directory tree can have all their
666       test files evaluated by a single runAllTests command.
667
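       For example, an entire directory of test files can be run in a single
       process like this (a sketch; the directory name tests is illustrative):

              package require tcltest
              tcltest::configure -testdir [file join [pwd] tests] \
                      -singleproc true -verbose {pass body error}
              tcltest::runAllTests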

CONFIGURABLE OPTIONS

669       The configure command is used to set and query the configurable options
670       of tcltest.  The valid options are:
671
672       -singleproc boolean
673              Controls  whether  or not runAllTests spawns a child process for
674              each test file.  No spawning  when  boolean  is  true.   Default
675              value is false.
676
677       -debug level
678              Sets  the  debug level to level, an integer value indicating how
679              much debugging information should be printed  to  stdout.   Note
680              that  debug  messages  always  go  to stdout, independent of the
681              value of configure -outfile.  Default value is  0.   Levels  are
682              defined as:
683
684              0   Do not display any debug information.
685
686              1   Display  information  regarding  whether  a  test is skipped
                  because it does not match any of the tests that were
                  specified by configure -match (userSpecifiedNonMatch) or
689                  matches any of the tests specified by configure -skip (user‐
690                  SpecifiedSkip).   Also print warnings about possible lack of
691                  cleanup or balance in test files.  Also print warnings about
692                  any re-use of test names.
693
694              2   Display the flag array parsed by the command line processor,
695                  the contents of the global env array, and  all  user-defined
696                  variables  that  exist  in the current namespace as they are
697                  used.
698
699              3   Display information regarding what individual procs  in  the
700                  test harness are doing.
701
702       -verbose level
703              Sets  the  type  of output verbosity desired to level, a list of
704              zero or more of the elements body,  pass,  skip,  start,  error,
705              line, msec and usec.  Default value is “body error”.  Levels are
706              defined as:
707
708              body (b)
709                     Display the body of failed tests
710
711              pass (p)
712                     Print output when a test passes
713
714              skip (s)
715                     Print output when a test is skipped
716
717              start (t)
718                     Print output whenever a test starts
719
720              error (e)
721                     Print errorInfo and errorCode, if they exist, when a test
722                     return code does not match its expected return code
723
724              line (l)
725                     Print source file line information of failed tests
726
727              msec (m)
728                     Print each test's execution time in milliseconds
729
730              usec (u)
731                     Print each test's execution time in microseconds
732
              Note that the msec and usec verbosity levels are provided as
              indicative measures only.  They do not tackle the problem of
              repeatability, which should be considered in performance tests
              or benchmarks.  To use these verbosity levels to thoroughly
              track performance degradations, consider wrapping your test
              bodies with time commands.
739
740              The single letter abbreviations noted above are also  recognized
741              so  that “configure -verbose pt” is the same as “configure -ver‐
742              bose {pass start}”.
743
744       -preservecore level
745              Sets the core preservation level to level.   This  level  deter‐
746              mines how stringent checks for core files are.  Default value is
747              0.  Levels are defined as:
748
749              0      No checking — do not check for core files at the  end  of
750                     each  test  command, but do check for them in runAllTests
751                     after all test files have been evaluated.
752
753              1      Also check for core files at the end of  each  test  com‐
754                     mand.
755
756              2      Check  for  core  files at all times described above, and
757                     save a copy of  each  core  file  produced  in  configure
758                     -tmpdir.
759
760       -limitconstraints boolean
761              Sets  the  mode by which test honors constraints as described in
762              TESTS above.  Default value is false.
763
764       -constraints list
765              Sets all the constraints in list to true.  Also used in combina‐
766              tion  with configure -limitconstraints true to control an alter‐
767              native constraint mode as described  in  TESTS  above.   Default
768              value is an empty list.
769
770       -tmpdir directory
771              Sets  the temporary directory to be used by makeFile, makeDirec‐
772              tory, viewFile, removeFile, and removeDirectory as  the  default
773              directory  where temporary files and directories created by test
774              files should be created.  Default value is workingDirectory.
775
776       -testdir directory
777              Sets the directory searched by runAllTests for  test  files  and
778              subdirectories.  Default value is workingDirectory.
779
780       -file patternList
781              Sets  the list of patterns used by runAllTests to determine what
782              test files to evaluate.  Default value is “*.test”.
783
784       -notfile patternList
785              Sets the list of patterns used by runAllTests to determine  what
786              test  files  to  skip.  Default value is “l.*.test”, so that any
787              SCCS lock files are skipped.
788
789       -relateddir patternList
790              Sets the list of patterns used by runAllTests to determine  what
              subdirectories to search for an all.tcl file.  Default value
              is “*”.
793
794       -asidefromdir patternList
795              Sets the list of patterns used by runAllTests to determine  what
796              subdirectories  to  skip  when  searching  for  an all.tcl file.
797              Default value is an empty list.
798
799       -match patternList
800              Set the list of patterns used by test  to  determine  whether  a
801              test should be run.  Default value is “*”.
802
803       -skip patternList
804              Set  the  list  of  patterns used by test to determine whether a
805              test should be skipped.  Default value is an empty list.
806
807       -load script
808              Sets a script to be evaluated  by  loadTestedCommands.   Default
809              value is an empty script.
810
811       -loadfile filename
812              Sets the filename from which to read a script to be evaluated by
813              loadTestedCommands.  This is an alternative to -load.  They can‐
814              not be used together.
815
816       -outfile filename
817              Sets  the file to which all output produced by tcltest should be
818              written.  A file named filename will be opened for writing,  and
819              the resulting channel will be set as the value of outputChannel.
820
821       -errfile filename
822              Sets  the  file  to  which  all error output produced by tcltest
823              should be written.  A file named filename  will  be  opened  for
824              writing,  and  the resulting channel will be set as the value of
825              errorChannel.
826

CREATING TEST SUITES WITH TCLTEST

828       The fundamental element of a test suite is the individual test command.
829       We begin with several examples.
830
831       [1]    Test of a script that returns normally.
832
833                     test example-1.0 {normal return} {
834                         format %s value
835                     } value
836
837       [2]    Test  of a script that requires context setup and cleanup.  Note
838              the bracing and indenting style that avoids any  need  for  line
839              continuation.
840
841                     test example-1.1 {test file existence} -setup {
842                         set file [makeFile {} test]
843                     } -body {
844                         file exists $file
845                     } -cleanup {
846                         removeFile test
847                     } -result 1
848
849       [3]    Test of a script that raises an error.
850
851                     test example-1.2 {error return} -body {
852                         error message
853                     } -returnCodes error -result message
854
855       [4]    Test with a constraint.
856
857                     test example-1.3 {user owns created files} -constraints {
858                         unix
859                     } -setup {
860                         set file [makeFile {} test]
861                     } -body {
862                         file attributes $file -owner
863                     } -cleanup {
864                         removeFile test
865                     } -result $::tcl_platform(user)
866
867       At  the  next  higher  layer of organization, several test commands are
868       gathered together into a single test  file.   Test  files  should  have
869       names  with  the “.test” extension, because that is the default pattern
870       used by runAllTests to find test files.  It is a good rule of thumb  to
871       have  one  test  file for each source code file of your project.  It is
872       good practice to edit the test file and the source code file  together,
873       keeping tests synchronized with code changes.
874
875       Most  of  the  code  in the test file should be the test commands.  Use
876       constraints to skip tests, rather than conditional evaluation of test.
877
878       [5]    Recommended system for writing  conditional  tests,  using  con‐
879              straints to guard:
880
                     testConstraint X [expr {$myRequirement}]
882                     test goodConditionalTest {} X {
883                         # body
884                     } result
885
886       [6]    Discouraged  system  for  writing conditional tests, using if to
887              guard:
888
889                     if $myRequirement {
890                         test badConditionalTest {} {
891                             #body
892                         } result
893                     }
894
895       Use the -setup and -cleanup options to establish and release  all  con‐
896       text  requirements of the test body.  Do not make tests depend on prior
897       tests in the file.  Those prior tests might  be  skipped.   If  several
       consecutive tests require the same context, the appropriate setup and
       cleanup scripts may be stored in a variable for passing to each test's
       -setup and -cleanup options.  This is a better solution than performing
901       setup outside of test commands, because the setup will only be done  if
902       necessary,  and any errors during setup will be reported, and not cause
903       the test file to abort.
904
905       A test file should be able to be combined with other test files and not
906       interfere with them, even when configure -singleproc 1 causes all files
907       to be evaluated in a common interpreter.  A simple way to achieve  this
908       is  to  have  your  tests  define all their commands and variables in a
909       namespace that is deleted when the test file evaluation is complete.  A
910       good namespace to use is a child namespace test of the namespace of the
911       module you are testing.
912
913       A test file should also be able to be evaluated directly as  a  script,
914       not depending on being called by a master runAllTests.  This means that
915       each test file should process command line arguments to give the tester
916       all the configuration control that tcltest provides.
917
918       After  all  tests  in  a  test file, the command cleanupTests should be
919       called.
920
921       [7]    Here is a sketch  of  a  sample  test  file  illustrating  those
922              points:
923
924                     package require tcltest 2.2
925                     eval ::tcltest::configure $argv
926                     package require example
927                     namespace eval ::example::test {
928                         namespace import ::tcltest::*
929                         testConstraint X [expr {...}]
930                         variable SETUP {#common setup code}
931                         variable CLEANUP {#common cleanup code}
932                         test example-1 {} -setup $SETUP -body {
933                             # First test
934                         } -cleanup $CLEANUP -result {...}
935                         test example-2 {} -constraints X -setup $SETUP -body {
936                             # Second test; constrained
937                         } -cleanup $CLEANUP -result {...}
938                         test example-3 {} {
939                             # Third test; no context required
940                         } {...}
941                         cleanupTests
942                     }
943                     namespace delete ::example::test
944
945       The next level of organization is a full test suite, made up of several
946       test files.  One script is used to control the entire suite.  The basic
947       function  of  this script is to call runAllTests after doing any neces‐
948       sary setup.  This script is usually named all.tcl because that  is  the
949       default  name  used  by runAllTests when combining multiple test suites
950       into one testing run.
951
952       [8]    Here is a sketch of a sample test suite master script:
953
954                     package require Tcl 8.4
955                     package require tcltest 2.2
956                     package require example
957                     ::tcltest::configure -testdir \
958                             [file dirname [file normalize [info script]]]
959                     eval ::tcltest::configure $argv
960                     ::tcltest::runAllTests
961

COMPATIBILITY

963       A number of commands and variables in the ::tcltest namespace  provided
964       by earlier releases of tcltest have not been documented here.  They are
965       no longer part of the supported public interface of tcltest and  should
966       not be used in new test suites.  However, to continue to support exist‐
967       ing test suites written to the older interface specifications, many  of
968       those  deprecated  commands  and  variables  still work as before.  For
969       example, in many circumstances, configure will be automatically  called
970       shortly  after package require tcltest 2.1 succeeds with arguments from
971       the variable ::argv.  This is to support test suites that depend on the
972       old  behavior  that  tcltest  was automatically configured from command
973       line arguments.  New test files should not depend on this,  but  should
974       explicitly include
975
976              eval ::tcltest::configure $::argv
977
978       or
979
980              ::tcltest::configure {*}$::argv
981
982       to establish a configuration from command line arguments.
983

KNOWN ISSUES

985       There  are two known issues related to nested evaluations of test.  The
986       first issue relates to the stack level in which test scripts  are  exe‐
987       cuted.   Tests  nested  within  other tests may be executed at the same
988       stack level as the outermost test.  For example, in the following code:
989
990              test level-1.1 {level 1} {
991                  -body {
992                      test level-2.1 {level 2} {
993                      }
994                  }
995              }
996
997       any script executed in level-2.1 may be  executed  at  the  same  stack
998       level as the script defined for level-1.1.
999
1000       In  addition,  while  two  tests  have  been  run, results will only be
1001       reported by cleanupTests for tests at the same level as test level-1.1.
1002       However,  test  results  for  all  tests run prior to level-1.1 will be
1003       available when test level-2.1 runs.  What this means is that if you try
       to access the test results for test level-2.1, it may say that “m”
1005       tests have run, “n” tests have been skipped, “o” tests have passed  and
1006       “p” tests have failed, where “m”, “n”, “o”, and “p” refer to tests that
1007       were run at the same test level as test level-1.1.
1008
1009       Implementation of output and  error  comparison  in  the  test  command
1010       depends  on  usage  of puts in your application code.  Output is inter‐
1011       cepted by redefining the global puts command  while  the  defined  test
1012       script is being run.  Errors thrown by C procedures or printed directly
1013       from C applications will not be caught by the test command.  Therefore,
1014       usage  of  the  -output and -errorOutput options to test is useful only
1015       for pure Tcl applications that use puts to produce output.
1016

KEYWORDS

       test, test harness, test suite



tcltest                               2.3                           tcltest(n)