SKIPFISH(1)                 General Commands Manual                SKIPFISH(1)


NAME
       skipfish - web application security scanner

SYNOPSIS
       skipfish [options] -o output-directory [ start-url | @url-file [ start-
       url2 ... ]]

DESCRIPTION
       skipfish is an active web application security reconnaissance tool. It
       prepares an interactive sitemap for the targeted site by carrying out a
       recursive crawl and dictionary-based probes. The resulting map is then
       annotated with the output from a number of active (but hopefully non-
       disruptive) security checks. The final report generated by the tool is
       meant to serve as a foundation for professional web application secu‐
       rity assessments.

OPTIONS SUMMARY
       Authentication and access options:
         -A user:pass   - use specified HTTP authentication credentials
         -F host=IP     - pretend that ´host´ resolves to ´IP´
         -C name=val    - append a custom cookie to all requests
         -H name=val    - append a custom HTTP header to all requests
         -b (i|f|p)     - use headers consistent with MSIE / Firefox / iPhone
         -N             - do not accept any new cookies

       Crawl scope options:
         -d max_depth   - maximum crawl tree depth (16)
         -c max_child   - maximum children to index per node (512)
         -x max_desc    - maximum descendants to index per branch (8192)
         -r r_limit     - max total number of requests to send (100000000)
         -p crawl%      - node and link crawl probability (100%)
         -q hex         - repeat probabilistic scan with given seed
         -I string      - only follow URLs matching ´string´
         -X string      - exclude URLs matching ´string´
         -K string      - do not fuzz parameters named ´string´
         -D domain      - crawl cross-site links to another domain
         -B domain      - trust, but do not crawl, another domain
         -Z             - do not descend into 5xx locations
         -O             - do not submit any forms
         -P             - do not parse HTML, etc, to find new links

       Reporting options:
         -o dir         - write output to specified directory (required)
         -M             - log warnings about mixed content / non-SSL passwords
         -E             - log all caching intent mismatches
         -U             - log all external URLs and e-mails seen
         -Q             - completely suppress duplicate nodes in reports
         -u             - be quiet, disable realtime progress stats

       Dictionary management options:
         -W wordlist    - use a specified read-write wordlist (required)
         -S wordlist    - load a supplemental read-only wordlist
         -L             - do not auto-learn new keywords for the site
         -Y             - do not fuzz extensions in directory brute-force
         -R age         - purge words hit more than ´age´ scans ago
         -T name=val    - add new form auto-fill rule
         -G max_guess   - maximum number of keyword guesses to keep (256)

       Performance settings:
         -l max_req     - max requests per second (0.000000)
         -g max_conn    - max simultaneous TCP connections, global (40)
         -m host_conn   - max simultaneous connections, per target IP (10)
         -f max_fail    - max number of consecutive HTTP errors (100)
         -t req_tmout   - total request response timeout (20 s)
         -w rw_tmout    - individual network I/O timeout (10 s)
         -i idle_tmout  - timeout on idle HTTP connections (10 s)
         -s s_limit     - response size limit (200000 B)
         -e             - do not keep binary responses for reporting

       Other settings:
         -k duration    - stop scanning after the given duration h:m:s
         --config file  - load specified configuration file

AUTHENTICATION AND ACCESS
       Some sites require authentication, and skipfish supports this in dif‐
       ferent ways. First, there is basic HTTP authentication, for which you
       can use the -A flag. Second, and more common, are sites that require
       authentication at the web application level. For these sites, the best
       approach is to capture authenticated session cookies and provide them
       to skipfish using the -C flag (multiple times if needed). Finally,
       you'll need to put some effort into protecting the session from being
       destroyed, by excluding logout links with -X and/or by rejecting new
       cookies with -N.
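
       As a hypothetical illustration, these flags can be combined to run an
       authenticated scan while keeping the captured session alive (the
       cookie name and value are placeholders for your own session cookie):

              skipfish -o output/dir/ -C jsession=myauthcookiehere -N \
                       -X /logout http://example.com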

       -F/--host <host=IP>
              Using this flag, you can set the ´Host:´ header value to define
              a custom mapping between a host and an IP (bypassing the
              resolver). This feature is particularly useful for not-yet-
              launched or legacy services that don't have the necessary DNS
              entries.
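
       For instance, to scan a site that has no public DNS entry yet, you
       might map its hostname to a staging address (the hostname and the
       documentation IP below are placeholders):

              skipfish -o output/dir/ -F example.com=192.0.2.1 \
                       http://example.com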

       -H/--header <header=value>
              When it comes to customizing your HTTP requests, you can also
              use the -H option to insert any additional, non-standard head‐
              ers. This flag also allows the default headers to be overwrit‐
              ten.
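
       For example, to tag all scan traffic so it can be recognized in
       server logs (the header name and value are arbitrary placeholders):

              skipfish -o output/dir/ -H X-Scan-Origin=skipfish \
                       http://example.com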

       -C/--cookie <cookie=value>
              This flag can be used to add a cookie to the skipfish HTTP
              requests. This is particularly useful to perform authenticated
              scans by providing session cookies. When doing so, keep in mind
              that certain URLs (e.g. /logout) may destroy your session; you
              can combat this in two ways: by using the -N option, which
              causes the scanner to reject attempts to set or delete cookies,
              or by using the -X option to exclude logout URLs.

       -b/--user-agent <i|f|p>
              This flag allows the user-agent to be specified, where ´i´
              stands for Internet Explorer, ´f´ for Firefox, and ´p´ for
              iPhone. Using this flag is recommended in case the target site
              shows different behavior based on the user-agent (e.g. some
              sites use different templates for mobile and desktop clients).
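
       For example, to crawl a site with iPhone-style headers and compare
       the results against a default desktop scan:

              skipfish -o output/dir/ -b p http://example.com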

       -N/--reject-cookies
              This flag causes skipfish to ignore cookies that are being set
              by the site. This helps to enforce stateless tests and also
              prevents cookies set with ´-C´ from being overwritten.

       -A/--auth <username:password>
              For sites requiring basic HTTP authentication, you can use this
              flag to specify your credentials.

       --auth-form <URL>
              The login form to use with form authentication. By default
              skipfish will use the form's action URL to submit the creden‐
              tials. If this is missing, the login data is sent to the form
              URL. In case that is wrong, you can set the form handler URL
              with --auth-form-target <URL>.

       --auth-user <username>
              The username to be used during form authentication. Skipfish
              will try to detect the correct form field to use but if it
              fails to do so (and gives an error), then you can specify the
              form field name with --auth-user-field.

       --auth-pass <password>
              The password to be used during form authentication. Similar to
              auth-user, the form field name can (optionally) be set with
              --auth-pass-field.

       --auth-verify-url <URL>
              This URL allows skipfish to verify whether authentication was
              successful. This requires a URL where anonymous and authenti‐
              cated requests are answered with a different response.
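
       Putting the form authentication flags together, a hypothetical full
       invocation might look like this (all URLs and credentials below are
       placeholders):

              skipfish -o output/dir/ --auth-form http://example.com/login \
                       --auth-user myuser --auth-pass mypass \
                       --auth-verify-url http://example.com/profile \
                       http://example.com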

CRAWLING SCOPE
       Some sites may be too big to scan in a reasonable timeframe. If the
       site features well-defined tarpits - for example, 100,000 nearly iden‐
       tical user profiles as a part of a social network - these specific
       locations can be excluded with -X. In other cases, you may need to
       resort to other settings: -d limits crawl depth to a specified number
       of subdirectories; -c limits the number of children per directory; -x
       limits the total number of descendants per crawl tree branch; and -r
       limits the total number of requests to send in a scan.
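
       For example, a sketch of a scan of a very large site might cap all
       four limits at once (the values below are illustrative, not tuned
       recommendations):

              skipfish -o output/dir/ -d 5 -c 100 -x 1000 -r 200000 \
                       http://example.com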

       -d/--max-crawl-depth <depth>
              Limit the depth of subdirectories being crawled (see above).

       -c/--max-crawl-child <children>
              Limit the number of subdirectories per directory we crawl into
              (see above).

       -x/--max-crawl-descendants <descendants>
              Limit the total number of descendants per crawl tree branch
              (see above).

       -r/--max-request-total <requests>
              Limit the maximum number of requests to send during a scan.

       -p/--crawl-probability <0-100>
              By specifying a percentage between 1 and 100%, it is possible
              to tell the crawler to follow fewer than 100% of all links, and
              try fewer than 100% of all dictionary entries. This - naturally
              - limits the completeness of a scan, but unlike most other set‐
              tings, it does so in a balanced, non-deterministic manner. It
              is extremely useful when you are setting up time-bound, but
              periodic assessments of your infrastructure.

       -q/--seed <seed>
              This flag sets the initial random seed for the crawler to a
              specified value. This can be used to exactly reproduce a previ‐
              ous scan to compare results. Randomness is relied upon most
              heavily in the -p mode, but also influences a couple of other
              scan management decisions.
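
       For instance, to run a reproducible sampling scan that follows about
       a quarter of all links, pick a seed and reuse it in a later run (the
       seed value below is arbitrary):

              skipfish -o output/dir/ -p 25 -q 0x12345678 http://example.com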

       -I/--include-string <domain/path>
              With this flag, you can tell skipfish to only crawl and test
              URLs that match a certain string. This can help to narrow down
              the scope of a scan by only whitelisting certain sections of a
              web site (e.g. -I /shop).

       -X/--exclude-string <domain/path>
              The -X option can be used to exclude files / directories from
              the scan. This is useful to avoid session termination (e.g. by
              excluding /logout) or just for speeding up your scans by
              excluding static content directories like /icons/, /doc/,
              /manuals/, and other standard, mundane locations along these
              lines.

       -K/--skip-parameter <parameter name>
              This flag allows you to specify parameter names not to fuzz
              (useful for applications that put session IDs in the URL, to
              minimize noise).
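
       Combined, these scope filters might look like the following sketch,
       which restricts the crawl to /shop, skips the logout link, and leaves
       a (hypothetical) session ID parameter alone:

              skipfish -o output/dir/ -I /shop -X /logout -K sessionid \
                       http://example.com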

       -D/--include-domain <domain>
              Allows you to specify additional hosts or domains to be in-
              scope for the test. By default, all hosts appearing in the com‐
              mand-line URLs are added to the list - but you can use -D to
              broaden these rules. The result of this will be that the
              crawler will follow and test links that point to these addi‐
              tional hosts.

       -B/--trust-domain <domain>
              In some cases, you do not want to actually crawl a third-party
              domain, but you trust the owner of that domain enough not to
              worry about cross-domain content inclusion from that location.
              To suppress warnings, you can use the -B option.
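
       For example, to also crawl an application's API host while merely
       trusting its CDN (both hostnames below are placeholders):

              skipfish -o output/dir/ -D api.example.com -B cdn.example.com \
                       http://example.com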

       -Z/--skip-error-pages
              Do not crawl into pages / directories that return a 5xx error.

       -O/--no-form-submits
              Using this flag will cause forms to be ignored during the scan.

       -P/--no-html-parsing
              This flag disables link extraction and, effectively, crawling.
              Using -P is useful when you want to test one specific URL or
              when you want to feed skipfish a list of URLs that were col‐
              lected with an external crawler.

TESTING SCOPE
       --checks
              EXPERIMENTAL: Displays the crawler injection tests. The output
              shows the index number (useful for --checks-toggle), the check
              name and whether the check is enabled.

       --checks-toggle <check1,check2,..>
              EXPERIMENTAL: Every injection test can be enabled or disabled
              using this flag. As value, you need to provide the check num‐
              bers which can be obtained with the --checks flag. Multiple
              checks can be toggled via a comma-separated value (e.g.
              --checks-toggle 1,2).
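
       As a sketch, you might first list the available checks and then tog‐
       gle two of them by index in a subsequent scan:

              skipfish --checks
              skipfish -o output/dir/ --checks-toggle 1,2 http://example.com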

       --no-injection-tests
              EXPERIMENTAL: Disables all injection tests for this scan and
              limits the scan to crawling and, optionally, bruteforcing. As
              with all scans, the output directory will contain a pivots.txt
              file. This file can be used to feed future scans.

REPORTING OPTIONS
       -o/--output <dir>
              The report will be written to this location. The directory is
              one of the two mandatory options and must not exist upon start‐
              ing the scan.

       -M/--log-mixed-content
              Enable the logging of mixed content. This is highly recommended
              when scanning SSL-only sites to detect insecure content inclu‐
              sion via non-SSL protected links.

       -E/--log-cache-mismatches
              This will cause additional content caching errors to be
              reported.

       -U/--log-external-urls
              Log all external URLs and email addresses that were seen during
              the scan.

       -Q/--log-unique-nodes
              Enable this to completely suppress duplicate nodes in reports.

       -u/--quiet
              This will cause skipfish to suppress all console output during
              the scan.

       -v/--verbose
              EXPERIMENTAL: Use this flag to enable runtime reporting of, for
              example, problems that are detected. Can be used multiple times
              to increase verbosity and should be used in combination with -u
              unless you run skipfish with stderr redirected to a file.

DICTIONARY MANAGEMENT OPTIONS
       Make sure you've read the instructions provided in
       doc/dictionaries.txt to select the right dictionary file and configure
       it correctly. This step has a profound impact on the quality of scan
       results later on.

       -S/--wordlist <file>
              Load the specified (read-only) wordlist for use during the
              scan. This flag is optional but use of a dictionary is highly
              recommended when performing a blackbox scan as it will high‐
              light hidden files and directories.

       -W/--rw-wordlist <file>
              Specify an initially empty file for any newly learned site-spe‐
              cific keywords (which will come in handy in future assess‐
              ments). You can use -W- or -W /dev/null if you don't want to
              store auto-learned keywords anywhere. Typically you will want
              to use one of the packaged dictionaries (e.g. complete.wl) and
              possibly add a custom dictionary.
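
       A typical dictionary setup, then, combines a packaged read-only list
       with a fresh site-specific read-write list (the read-write file name
       below is a placeholder):

              skipfish -o output/dir/ -S dictionaries/complete.wl \
                       -W site-keywords.wl http://example.com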

       -L/--no-keyword-learning
              During the scan, skipfish will try to learn and use new key‐
              words. This flag disables that behavior and should be used when
              any form of brute-forcing is not desired.

       -Y/--no-extension-brute
              This flag will disable extension guessing during directory
              bruteforcing.

       -R <age>
              Use of this flag allows old words to be purged from wordlists.
              It is intended to help keep dictionaries clean when used in
              recurring scans.

       -T/--form-value <name=value>
              Skipfish also features a form auto-completion mechanism in
              order to maximize scan coverage. The values should be non-mali‐
              cious, as they are not meant to implement security checks - but
              rather, to get past input validation logic. You can define
              additional rules, or override existing ones, with the -T option
              (-T form_field_name=field_value, e.g. -T login=test123 -T pass‐
              word=test321 - although note that -C and -A are a much better
              method of logging in).
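
       For example, to steer the auto-filler past a simple login form (the
       field names and values below are the placeholders used above):

              skipfish -o output/dir/ -T login=test123 -T password=test321 \
                       http://example.com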

       -G <max guesses>
              During the scan, a temporary buffer of newly detected keywords
              is maintained. The size of this buffer can be changed with this
              flag and doing so influences bruteforcing.

PERFORMANCE OPTIONS
       The default performance settings should be fine for most servers, but
       when the report indicates there were connection problems, you might
       want to tweak some of the values here. For unstable servers, the scan
       coverage is likely to improve when using low values for the rate and
       connection flags.
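
       For an unstable server, a gentler variant of the defaults might look
       like this sketch (the values are illustrative starting points, not
       tuned recommendations):

              skipfish -o output/dir/ -l 5 -g 2 -m 1 -t 30 -i 15 \
                       http://example.com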

       -l/--max-request-rate <rate>
              This flag can be used to limit the number of requests per sec‐
              ond. This is very useful when the target server can't keep up
              with the high volume of requests that are generated by skip‐
              fish. Keeping the number of requests per second low can also
              help prevent some rate-based DoS protection mechanisms from
              kicking in and ruining the scan.

       -g/--max-connections <number>
              The max simultaneous TCP connections (global) can be set with
              this flag.

       -m/--max-host-connections <number>
              The max simultaneous TCP connections, per target IP, can be set
              with this flag.

       -f/--max-failed-requests <number>
              Controls the maximum number of consecutive HTTP errors you are
              willing to see before aborting the scan. For large scans, you
              probably want to set a higher value here.

       -t/--request-timeout <timeout>
              Set the total request timeout, to account for really slow or
              really fast sites.

       -w/--network-timeout <timeout>
              Set the network I/O timeout.

       -i/--idle-timeout <timeout>
              Specify the timeout for idle HTTP connections.

       -s/--response-size <size>
              Sets the maximum length of a response to fetch and parse
              (longer responses will be truncated).

       -e/--discard-binary
              This prevents binary documents from being kept in memory for
              reporting purposes, and frees up a lot of RAM.

       --flush-to-disk
              This causes request / response data to be flushed to disk
              instead of being kept in memory. As a result, the memory usage
              for large scans will be significantly lower.
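
       For a very large scan where memory is a concern, both memory-saving
       flags can be combined, as in this sketch:

              skipfish -o output/dir/ -e --flush-to-disk http://example.com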

EXAMPLES
       Scan type: config
         skipfish --config config/example.conf http://example.com

       Scan type: quick
         skipfish -o output/dir/ http://example.com

       Scan type: extensive bruteforce
         skipfish [...other options..] -S dictionaries/complete.wl \
           http://example.com

       Scan type: without bruteforcing
         skipfish [...other options..] -LY http://example.com

       Scan type: authenticated (basic)
         skipfish [...other options..] -A username:password http://example.com

       Scan type: authenticated (cookie)
         skipfish [...other options..] -C jsession=myauthcookiehere -X /logout \
           http://example.com

       Scan type: flaky server
         skipfish [...other options..] -l 5 -g 2 -t 30 -i 15 http://example.com

NOTES
       The default values for all flags can be viewed by running ´./skipfish
       -h´.

AUTHOR
       skipfish was written by Michal Zalewski <lcamtuf@google.com>, with
       contributions from Niels Heinen <heinenn@google.com>, Sebastian
       Roschke <s.roschke@googlemail.com>, and other parties.

       This manual page was written with the help of Thorsten Schifferdecker
       <tsd@debian.systs.org>.



                                  May 6, 2012                      SKIPFISH(1)