regression test suite for security components.
by Michael Brouwer


GOALS
=====

The goals of this test setup are to have something that requires
zero configuration and setup, and that allows developers to quickly
write new standalone test cases.


USAGE
=====

The tests can be run from the top-level Makefile by typing:
	make test
or individually from the command line or under gdb.  Tests will be
built into a directory called build by default, or into
LOCAL_BUILD_DIR if you set that in your environment.
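
For example (a sketch; the test name yourtest is hypothetical):

	LOCAL_BUILD_DIR=$HOME/sectests make test    # build and run everything
	$HOME/sectests/yourtest                     # run a single test by hand
	gdb $HOME/sectests/yourtest                 # or debug it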


DIRECTORY LAYOUT
================

Currently there are subdirectories for a number of different parts
of the security stack.  Each directory contains some of the unit
tests I've managed to find from Radar and other places.

The test programs output their results in a format called TAP.  This
is described here:
http://search.cpan.org/~petdance/Test-Harness-2.46/lib/Test/Harness/TAP.pod
Because of this we can use Perl's Test::Harness to run the tests
and produce some nice-looking output without the need to write an
entire test harness.

Tests can be written in C, C++, Objective-C, or Perl (using
Test::More for Perl).


WRITING TESTS
=============

To add a new test, simply copy one of the existing ones and hack
away.  Any file with a main() function in it will be built into a
test automatically by the top-level Makefile (no configuration
required).

To use the testmore C "library" all you need to do is #include
"testmore.h" in your test program.

Then in your main function you must call:

	plan_tests(NUMTESTS)

where NUMTESTS is the number of test cases your test program will
run.  After that you can start writing tests.  There are a couple
of macros to help you get started.

The following are ways to run an actual test case (as in they count
towards the NUMTESTS number above); a complete example program
follows the list:

ok(EXPR, NAME)
    Evaluate EXPR; if it's true the test passes, if false it fails.
    The second argument is a descriptive name of the test for
    debugging purposes.

is(EXPR, VALUE, NAME)
    Evaluate EXPR; if it's equal to VALUE the test passes, otherwise
    it fails.  This is equivalent to ok(EXPR == VALUE, NAME) except
    it produces nicer output in the failure case.
    CAVEAT: Currently EXPR and VALUE must both be of type int.

isnt(EXPR, VALUE, NAME)
    Opposite of is() above.
    CAVEAT: Currently EXPR and VALUE must both be of type int.

cmp_ok(EXPR, OP, VALUE, NAME)
    Succeeds if EXPR OP VALUE is true.  Produces a diagnostic if not.
    CAVEAT: Currently EXPR and VALUE must both be of type int.

ok_status(EXPR, NAME)
    Evaluate EXPR; if it's 0 the test passes, otherwise print a
    diagnostic with the name and number of the error returned.

is_status(EXPR, VALUE, NAME)
    Evaluate EXPR; if the error returned equals VALUE the test
    passes, otherwise print a diagnostic with the expected and
    actual error returned.

ok_unix(EXPR, NAME)
    Evaluate EXPR; if it's >= 0 the test passes, otherwise print a
    diagnostic with the name and number of the errno.

is_unix(EXPR, VALUE, NAME)
    Evaluate EXPR; if the errno it sets equals VALUE the test
    passes, otherwise print a diagnostic with the expected and
    actual errno.

Finally, if you somehow can't express the success or failure of a
test using the macros above, you can use pass(NAME) or fail(NAME)
explicitly.  These are equivalent to ok(1, NAME) and ok(0, NAME),
respectively.
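
Putting it together, a minimal complete test program might look
like this (a sketch; the checks themselves are made up for
illustration):

    #include "testmore.h"

    int
    main(void)
    {
        plan_tests(4);    /* we will run 4 test cases */

        ok(1 + 1 == 2, "addition");
        is(2 * 2, 4, "is() compares ints");
        cmp_ok(2 * 2, >, 3, "cmp_ok() takes an operator");
        ok_status(0, "ok_status() expects a 0 status");

        return 0;
    }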


LEAKS
=====

If you want to check for leaks in your test you can #include
"testleaks.h" in your program and call:

ok_leaks(NAME)
    Passes if there are no leaks in your program.

is_leaks(VALUE, NAME)
    Passes if there are exactly VALUE leaks in your program.  Useful
    if you are calling code that is known to leak and you can't fix
    it, but you still want to make sure there are no new leaks in
    your code.
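
For example (a sketch; known_leaker() is a hypothetical stand-in
for code with a known, unfixable leak):

    #include "testmore.h"
    #include "testleaks.h"

    #include <stdlib.h>

    /* Hypothetical stand-in for code with a known leak. */
    static void
    known_leaker(void)
    {
        (void)malloc(16);    /* deliberately never freed */
    }

    int
    main(void)
    {
        plan_tests(2);

        /* ... run the code under test here ... */
        ok_leaks("no leaks after the code under test");

        known_leaker();
        is_leaks(1, "only the one known leak");

        return 0;
    }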


C++
===

For C++ programs you can #include "testcpp.h" which defines these
additional macros:

no_throw(EXPR, NAME)
    Success if EXPR doesn't throw.

does_throw(EXPR, NAME)
    Success if EXPR does throw.

is_throw(EXPR, CLASS, FUNC, VALUE, NAME)
    Success if EXPR throws an exception of type CLASS and
    CLASS.FUNC == VALUE.  Example usage:
    is_throw(CssmError::throwMe(42), CssmError, osStatus(), 42, "throwMe(42)");
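
For instance (a sketch using the standard library; any throwing
expression will do):

    #include "testmore.h"
    #include "testcpp.h"

    #include <vector>

    int
    main(void)
    {
        plan_tests(2);

        std::vector<int> v(1);
        no_throw(v.at(0), "in-bounds at() doesn't throw");
        does_throw(v.at(1), "out-of-bounds at() throws");

        return 0;
    }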


TODO and SKIP
=============

Sometimes you write a test case that is known to fail (because you
found a bug).  Rather than commenting out that test case you should
put it inside a TODO block.  This will cause the test to run but
the failure will not be reported as an error.  When the test starts
passing (presumably because someone fixed the bug) you can comment
out the TODO block and leave the test in place.

The syntax for doing this looks like so:

TODO: {
    todo("<rdar://problem/4000000> ER: AAPL target: $4,000,000/share");

    cmp_ok(apple_stock_price(), >=, 4000000, "stock over 4M");
}

Sometimes you don't want to run a particular test case or set of
test cases because something in the environment is missing or you
are running on a different version of the OS than the test was
designed for.  To achieve this you can use a SKIP block.

The syntax for a SKIP block looks like so:

SKIP: {
    skip("only runs on Tiger and later", 4, os_version() >= os_tiger);

    ok(tiger_test1(), "test1");
    ok(tiger_test2(), "test2");
    ok(tiger_test3(), "test3");
    ok(tiger_test4(), "test4");
}

How it works is like so: if the third argument to skip evaluates
to false, it breaks out of the SKIP block and reports N tests as
being skipped (where N is the second argument to skip).  The reason
for the tests being skipped is given as the first argument to skip.


UTILITY FUNCTIONS
=================

Anyone writing tests can add new utility functions.  Currently there
is a pair called tests_begin and tests_end.  To get them,
#include "testenv.h".  Calling them doesn't count as running a test
case, unless you wrap them in an ok() macro.  tests_begin creates
a unique dir in /tmp and sets HOME in the environment to that dir;
tests_end cleans up the mess that tests_begin made.
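
A typical use might look like this (a sketch; the argument
conventions for tests_begin and tests_end are assumptions here,
so check testenv.h for the real prototypes):

    #include "testmore.h"
    #include "testenv.h"

    #include <stdlib.h>

    int
    main(int argc, char *const *argv)
    {
        plan_tests(3);

        /* Assumed to take argc/argv; wrapping in ok() makes it count. */
        ok(tests_begin(argc, argv), "set up scratch dir");

        ok(getenv("HOME") != NULL, "HOME points at the scratch dir");

        /* Argument meaning assumed; see the tests_end() source below. */
        ok(tests_end(1), "cleanup");

        return 0;
    }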

When writing your own utility functions you will probably want to use
the setup("task") macro so that any uses of ok() and other macros
don't count as actual test cases run, but do report errors when they
fail.  Here is an example of how tests_end() does this:

int
tests_end(int result)
{
    setup("tests_end");
    /* Restore previous cwd and remove scratch dir. */
    return (ok_unix(fchdir(current_dir), "fchdir") &&
            ok_unix(close(current_dir), "close") &&
            ok_unix(rmdir_recursive(scratch_dir), "rmdir_recursive"));
}

setup() causes all tests until the end of the current function to
not count against your test program's test count, and they output
nothing if they succeed.

There is also a simple utility header called "testcssm.h" which
currently defines cssm_attach and cssm_detach functions for loading
and initializing CSSM and loading a module.
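
A hedged sketch of how these might be used (the prototypes below
are assumptions, not documentation; consult testcssm.h for the
real signatures):

    #include "testmore.h"
    #include "testcssm.h"

    int
    main(void)
    {
        plan_tests(2);

        CSSM_CSP_HANDLE csp;
        /* Assumed signature: attach a module by GUID, get a handle. */
        ok(cssm_attach(&gGuidAppleCSP, &csp), "cssm_attach");

        /* ... exercise the module here ... */

        /* Assumed signature: detach and unload the module. */
        ok(cssm_detach(&gGuidAppleCSP, csp), "cssm_detach");

        return 0;
    }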