ATF-TEST-CASE(4)         BSD Kernel Interfaces Manual         ATF-TEST-CASE(4)

NAME
atf-test-case — generic description of test cases

DESCRIPTION
A test case is a piece of code that stress-tests a specific feature of the
software.  This feature is typically self-contained enough, either in the
amount of code that implements it or in the general idea that describes it,
to warrant its independent testing.  Given this, test cases are very
fine-grained, but they attempt to group similar smaller tests which are
semantically related.

A test case is defined by three components regardless of the language it is
implemented in: a header, a body and a cleanup routine.  The header is,
basically, a declarative piece of code that defines several properties to
describe what the test case does and how it behaves.  In other words: it
defines the test case's meta-data, further described in the Meta-data
section.  The body is the test case itself.  It executes all actions needed
to reproduce the test, and checks for failures.  This body is only executed
if the abstract conditions specified by the header are met.  The cleanup
routine is a piece of code always executed after the body, regardless of
the exit status of the test case.  It can be used to undo side-effects of
the test case.  Note that almost all side-effects of a test case are
automatically cleaned up by the library; this is explained in more detail
in the rest of this document.
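The three components and their execution order can be sketched in plain sh.
The function names and the driver lines at the bottom are hypothetical;
real ATF bindings (atf-sh, atf-c, atf-c++) provide their own registration
mechanisms:

```shell
# Sketch of a test case's three components (hypothetical names).
mytest_head() {
    # header: declarative meta-data only, no test logic
    echo "descr=Checks that cp(1) copies file contents"
}
mytest_body() {
    # body: reproduce the scenario and check for failures
    echo hello > a.txt
    cp a.txt b.txt
    cmp -s a.txt b.txt
}
mytest_cleanup() {
    # cleanup: always runs, whatever the body's outcome
    rm -f a.txt b.txt
}

mytest_head                             # the header is always parsed
mytest_body && status=0 || status=$?    # body runs only if conditions hold
mytest_cleanup                          # the cleanup routine always runs
echo "status=$status"
```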

It is extremely important to keep the separation between a test case's
header and body well-defined, because the header is always parsed, whereas
the body is only executed when the conditions defined in the header are met
and when the user specifies that test case.

Finally, test cases are always contained in test programs.  The test
programs act as a front-end to them, providing a consistent interface to
the user and several APIs to ease their implementation.

Results
Upon termination, a test case reports a status and, optionally, a textual
reason describing why the test reported such a status.  The caller must
ensure that the test case really performed the task that its status
describes, as the test program may be bogus and therefore provide a
misleading result (e.g. a result that indicates success but an error code
of the program that says otherwise).
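The kind of cross-check the caller should perform can be illustrated with a
deliberately bogus stand-in for a test program (the script below is
hypothetical, written here only to show the mismatch):

```shell
# A bogus "test program": reports success but exits with a failure code.
cat > bogus.sh <<'EOF'
echo "passed"
exit 1
EOF
code=0
result=$(sh bogus.sh) || code=$?
# A careful caller compares the reported result against the exit code.
if [ "$result" = "passed" ] && [ "$code" -ne 0 ]; then
    echo "misleading result: reported '$result' but exit code is $code"
fi
rm -f bogus.sh
```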

The exit status of a test case is one of the following:

expected_death    The test case expects to terminate abruptly.

expected_exit     The test case expects to exit cleanly.

expected_failure  The test case expects to exit with a controlled
                  fatal/non-fatal failure.  If this happens, the test
                  program exits with a success error code.

expected_signal   The test case expects to receive a signal that makes it
                  terminate.

expected_timeout  The test case expects to execute for longer than its
                  timeout.

passed            The test case was executed successfully.  The test
                  program exits with a success error code.

skipped           The test case could not be executed because some
                  preconditions were not met.  This is not a failure
                  because it can typically be resolved by adjusting the
                  system to meet the necessary conditions.  This is always
                  accompanied by a reason, a message describing why the
                  test was skipped.  The test program exits with a success
                  error code.

failed            An error appeared during the execution of the test case.
                  This is always accompanied by a reason, a message
                  describing why the test failed.  The test program exits
                  with a failure error code.

The usefulness of the ‘expected_*’ results comes when writing test cases
that verify known failures caused, in general, by programming errors (aka
bugs).  Whenever the faulty condition that the ‘expected_*’ result is
trying to cover is fixed, the test case will be reported as ‘failed’ and
the developer will have to adjust it to match its new condition.

It is important to note that all ‘expected_*’ results are only provided as
a hint to the caller; the caller must verify that the test case did
actually terminate as the expected condition says.
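How a runtime engine might cross-check an expected failure against the
body's actual outcome can be sketched as follows (the helper is
hypothetical; the real logic lives in the ATF/Kyua runtime):

```shell
# classify EXPECTATION BODY_STATUS
# Maps a declared expectation plus the body's exit code to a result.
classify() {
    expectation=$1   # what the test case declared (failure or none)
    body_status=$2   # what actually happened
    case "$expectation:$body_status" in
        failure:0) echo "failed: expected failure did not occur" ;;
        failure:*) echo "expected_failure" ;;
        none:0)    echo "passed" ;;
        none:*)    echo "failed" ;;
    esac
}
classify failure 1    # known bug still present
classify failure 0    # the bug was fixed; the test must be updated
```

Note how fixing the bug flips the result to ‘failed’, which is exactly the
signal described above for updating the test case.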

Input/output
Test cases are free to print whatever they want to their stdout(4) and
stderr(4) file descriptors.  They are, in fact, encouraged to print status
information as they execute to keep the user informed of their actions.
This is especially important for long test cases.

Test cases log their results to an auxiliary file, which is then collected
by the test program they are contained in.  The developer need not care
about this as long as they use the correct APIs to implement the test
cases.

The standard input of the test cases is unconditionally connected to
‘/dev/zero’.
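As a quick plain-sh illustration (not ATF-specific), reading a few bytes
from ‘/dev/zero’ shows what a test case consuming its standard input would
observe: an endless stream of zero bytes.

```shell
# Read four bytes from /dev/zero and dump them in hexadecimal.
dd if=/dev/zero bs=1 count=4 2> /dev/null | od -An -tx1
```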

Meta-data
The following list describes all meta-data properties interpreted
internally by ATF.  You are free to define new properties in your test
cases and use them as you wish, but non-standard properties must be
prefixed by ‘X-’.

descr             Type: textual.  Required.

                  A brief textual description of the test case's purpose.
                  Will be shown to the user in reports.  Also good for
                  documentation purposes.

has.cleanup       Type: boolean.  Optional.

                  If set to true, specifies that the test case has a
                  cleanup routine that has to be executed by the runtime
                  engine during the cleanup phase of the execution.  This
                  property is automatically set by the framework when
                  defining a test case with a cleanup routine, so it
                  should never be set by hand.

ident             Type: textual.  Required.

                  The test case's identifier.  Must be unique inside the
                  test program and should be short but descriptive.

require.arch      Type: textual.  Optional.

                  A whitespace separated list of architectures that the
                  test case can be run under without causing errors due to
                  an architecture mismatch.

require.config    Type: textual.  Optional.

                  A whitespace separated list of configuration variables
                  that must be defined to execute the test case.  If any
                  of the required variables is not defined, the test case
                  is skipped.
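Matching a whitespace separated requirement list (as used by require.arch,
require.config and require.machine) against the current system value can be
sketched with a small hypothetical helper:

```shell
# matches_one_of VALUE LIST
# Succeeds if VALUE appears as one of the whitespace separated words in
# LIST; the surrounding spaces force whole-word matches only.
matches_one_of() {
    value=$1
    list=$2
    case " $list " in
        *" $value "*) return 0 ;;
        *)            return 1 ;;
    esac
}
matches_one_of amd64 "i386 amd64 sparc64" && echo "run"
matches_one_of alpha "i386 amd64 sparc64" || echo "skip: architecture mismatch"
```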

require.diskspace Type: integer.  Optional.

                  Specifies the minimum amount of available disk space
                  needed by the test.  The value can have a size suffix
                  such as ‘K’, ‘M’, ‘G’ or ‘T’ to make the amount of bytes
                  easier to type and read.
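Interpreting the size suffixes can be sketched as follows.  The helper
name is hypothetical, and treating the suffixes as powers of 1024 is an
assumption about the exact multiplier:

```shell
# size_to_bytes SIZE
# Expands a K/M/G/T suffix into a byte count (assumes powers of 1024).
size_to_bytes() {
    case $1 in
        *K) echo $(( ${1%K} * 1024 )) ;;
        *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
        *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
        *T) echo $(( ${1%T} * 1024 * 1024 * 1024 * 1024 )) ;;
        *)  echo "$1" ;;   # no suffix: already a byte count
    esac
}
size_to_bytes 2K    # 2048
size_to_bytes 1M    # 1048576
```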

require.files     Type: textual.  Optional.

                  A whitespace separated list of files that must be
                  present to execute the test case.  The names of these
                  files must be absolute paths.  If any of the required
                  files is not found, the test case is skipped.

require.machine   Type: textual.  Optional.

                  A whitespace separated list of machine types that the
                  test case can be run under without causing errors due to
                  a machine type mismatch.

require.memory    Type: integer.  Optional.

                  Specifies the minimum amount of physical memory needed
                  by the test.  The value can have a size suffix such as
                  ‘K’, ‘M’, ‘G’ or ‘T’ to make the amount of bytes easier
                  to type and read.

require.progs     Type: textual.  Optional.

                  A whitespace separated list of programs that must be
                  present to execute the test case.  These can be given as
                  plain names, in which case they are looked up in the
                  user's PATH, or as absolute paths.  If any of the
                  required programs is not found, the test case is
                  skipped.
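The two lookup rules — PATH search for plain names, direct existence check
for absolute paths — can be sketched with a hypothetical helper:

```shell
# progs_available LIST
# Succeeds only if every program in the whitespace separated LIST is
# found: absolute paths must exist and be executable, plain names are
# resolved through the user's PATH via command -v.
progs_available() {
    for prog in $1; do
        case $prog in
            /*) [ -x "$prog" ] || return 1 ;;
            *)  command -v "$prog" > /dev/null 2>&1 || return 1 ;;
        esac
    done
    return 0
}
progs_available "sh /bin/sh" && echo "run" || echo "skip: missing program"
```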

require.user      Type: textual.  Optional.

                  The required privileges to execute the test case.  Can
                  be one of ‘root’ or ‘unprivileged’.

                  If the test case is running as a regular user and this
                  property is ‘root’, the test case is skipped.

                  If the test case is running as root and this property is
                  ‘unprivileged’, the runtime engine will automatically
                  drop the privileges if the ‘unprivileged-user’
                  configuration property is set; otherwise the test case
                  is skipped.
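The decision table above can be mirrored in a small sketch.  The helper
takes the requirement, the effective uid and the configured
‘unprivileged-user’ as explicit arguments so that the outcome is easy to
inspect (the helper itself is hypothetical):

```shell
# check_require_user REQUIRED UID UNPRIVILEGED_USER
# Prints the action the runtime engine would take for a test case.
check_require_user() {
    required=$1; uid=$2; unprivileged_user=$3
    case $required in
        root)
            [ "$uid" -eq 0 ] && echo "run" || echo "skip: requires root" ;;
        unprivileged)
            if [ "$uid" -ne 0 ]; then
                echo "run"
            elif [ -n "$unprivileged_user" ]; then
                echo "run as $unprivileged_user"
            else
                echo "skip: no unprivileged-user configured"
            fi ;;
    esac
}
check_require_user root 1000 ""            # skip: requires root
check_require_user unprivileged 0 nobody   # run as nobody
```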

timeout           Type: integer.  Optional; defaults to ‘300’.

                  Specifies the maximum amount of time the test case can
                  run.  This is particularly useful because some tests can
                  stall either because they are incorrectly coded or
                  because they trigger an anomalous behavior of the
                  program.  It is not acceptable for these tests to stall
                  the whole execution of the test program.

                  Can optionally be set to zero, in which case the test
                  case has no run-time limit.  This is discouraged.
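Enforcing such a deadline can be sketched with a watchdog process that
kills a stalled body once the timeout expires.  This is a simplification:
the real runtime engine supervises the test case itself with its own
mechanism.

```shell
# Stand-in for a stalled test case body.
sh -c 'sleep 5' &
body=$!
# Watchdog: a 1-second deadline for demonstration purposes.
( sleep 1; kill "$body" 2> /dev/null ) &
watchdog=$!
status=0
wait "$body" || status=$?
kill "$watchdog" 2> /dev/null || true
# An exit status above 128 means the body was terminated by a signal.
if [ "$status" -gt 128 ]; then
    echo "body killed after timeout"
fi
```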

Environment
Every time a test case is executed, several environment variables are
cleared or reset to sane values to ensure they do not make the test fail
due to unexpected conditions.  These variables are:

HOME         Set to the work directory's path.

LANG         Undefined.

LC_ALL       Undefined.

LC_COLLATE   Undefined.

LC_CTYPE     Undefined.

LC_MESSAGES  Undefined.

LC_MONETARY  Undefined.

LC_NUMERIC   Undefined.

LC_TIME      Undefined.

TZ           Hardcoded to ‘UTC’.
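The effect of this scrubbing can be approximated with env -i, which starts
a child from an empty environment so that only explicitly provided values
survive (PATH is kept here only so the child shell can run; the exact
mechanism the runtime engine uses is not specified by this sketch):

```shell
# Run a child with a scrubbed environment, mirroring the table above.
workdir=$(mktemp -d)
env -i HOME="$workdir" TZ=UTC PATH="$PATH" \
    /bin/sh -c 'echo "TZ=$TZ LANG=${LANG-unset} HOME=$HOME"'
rmdir "$workdir"
```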

Work directories
The test program always creates a temporary directory and switches to it
before running the test case's body.  This way the test case is free to
modify its current directory as it wishes, and the runtime engine will be
able to clean it up later on in a safe way, removing any traces of its
execution from the system.  To do so, the runtime engine will perform a
recursive removal of the work directory without crossing mount points; if a
mount point is found, the file system will be unmounted (if possible).
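The work directory lifecycle can be sketched in plain sh (mount-point
handling omitted; as described above, the real engine also unmounts file
systems found under the work directory before removing it):

```shell
# Create the work directory, run the "body" inside it, then remove it.
workdir=$(mktemp -d)
(
    cd "$workdir" || exit 1
    # The body may litter its current directory freely.
    mkdir -p deeply/nested && echo scratch > deeply/nested/file
)
rm -rf "$workdir"    # recursive cleanup of every trace
if [ ! -d "$workdir" ]; then
    echo "work directory cleaned up"
fi
```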

File creation mode mask (umask)
Test cases are always executed with a file creation mode mask (umask) of
‘0022’.  The test case's code is free to change this during execution.
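A umask of 0022 clears the group and other write bits on newly created
files, so a plain file ends up with mode 0644, as this plain-sh check
shows:

```shell
# With umask 0022, a freshly created file gets mode rw-r--r-- (0644).
umask 0022
dir=$(mktemp -d)
touch "$dir/file"
ls -l "$dir/file" | cut -c 1-10    # -rw-r--r--
rm -r "$dir"
```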

SEE ALSO
atf-test-program(1)

BSD                             October 5, 2014                             BSD