VARNISHD(1)                                                        VARNISHD(1)


NAME
       varnishd - HTTP accelerator daemon
7
SYNOPSIS
       varnishd [-a [name=][listen_address[,PROTO]]] [-b [host[:port]|path]]
       [-C] [-d] [-F] [-f config] [-h type[,options]] [-I clifile]
       [-i identity] [-j jail[,jailoptions]] [-l vsl] [-M address:port]
       [-n workdir] [-P file] [-p param=value] [-r param[,param...]]
       [-S secret-file] [-s [name=]kind[,options]] [-T address[:port]]
       [-t TTL] [-V] [-W waiter]

       varnishd [-x parameter|vsl|cli|builtin|optstring]

       varnishd [-?]
20
DESCRIPTION
       The varnishd daemon accepts HTTP requests from clients, passes them
       on to a backend server and caches the returned documents to better
       satisfy future requests for the same document.
25
OPTIONS
   Basic options
28 -a <[name=][listen_address[,PROTO]]>
              Accept client requests on the specified listen_address (see
              below).
31
32 Name is referenced in logs. If name is not specified, "a0",
33 "a1", etc. is used.
34
35 PROTO can be "HTTP" (the default) or "PROXY". Both version 1
36 and 2 of the proxy protocol can be used.
37
38 Multiple -a arguments are allowed.
39
40 If no -a argument is given, the default -a :80 will listen to
41 all IPv4 and IPv6 interfaces.
42
43 -a <[name=][ip_address][:port][,PROTO]>
44 The ip_address can be a host name ("localhost"), an IPv4 dot‐
45 ted-quad ("127.0.0.1") or an IPv6 address enclosed in square
46 brackets ("[::1]")
47
48 If port is not specified, port 80 (http) is used.
49
50 At least one of ip_address or port is required.
51
52 -a <[name=][path][,PROTO][,user=name][,group=name][,mode=octal]>
              (VCL 4.1 and higher)
54
55 Accept connections on a Unix domain socket. Path must be abso‐
56 lute ("/path/to/listen.sock") or "@" followed by the name of an
57 abstract socket ("@myvarnishd").
58
59 The user, group and mode sub-arguments may be used to specify
60 the permissions of the socket file -- use names for user and
61 group, and a 3-digit octal value for mode. These sub-arguments
62 do not apply to abstract sockets.
63
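              As an example, a hypothetical setup (names, paths and the VCL
              file are illustrative only) listening for plain HTTP on port
              80, for PROXY traffic from a local TLS terminator, and on a
              Unix domain socket could combine several -a arguments:

                 varnishd -f /etc/varnish/default.vcl \
                          -a http=:80 \
                          -a proxy=localhost:8443,PROXY \
                          -a uds=/var/run/varnish.sock,user=varnish,mode=660
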
64 -b <[host[:port]|path]>
65 Use the specified host as backend server. If port is not speci‐
66 fied, the default is 8080.
67
68 If the value of -b begins with /, it is interpreted as the abso‐
69 lute path of a Unix domain socket to which Varnish connects. In
70 that case, the value of -b must satisfy the conditions required
71 for the .path field of a backend declaration, see vcl(7). Back‐
72 ends with Unix socket addresses may only be used with VCL ver‐
73 sions >= 4.1.
74
              -b can be used only once, and not together with -f.
76
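              For illustration (the addresses and paths are examples only),
              either of the following starts Varnish with the builtin VCL
              against a single backend, over TCP or over a Unix domain
              socket:

                 varnishd -a :80 -b 127.0.0.1:8080
                 varnishd -a :80 -b /run/backend.sock
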
77 -f config
78 Use the specified VCL configuration file instead of the builtin
79 default. See vcl(7) for details on VCL syntax.
80
81 If a single -f option is used, then the VCL instance loaded from
82 the file is named "boot" and immediately becomes active. If more
83 than one -f option is used, the VCL instances are named "boot0",
84 "boot1" and so forth, in the order corresponding to the -f argu‐
85 ments, and the last one is named "boot", which becomes active.
86
87 Either -b or one or more -f options must be specified, but not
88 both, and they cannot both be left out, unless -d is used to
89 start varnishd in debugging mode. If the empty string is speci‐
90 fied as the sole -f option, then varnishd starts without start‐
91 ing the worker process, and the management process will accept
92 CLI commands. You can also combine an empty -f option with an
93 initialization script (-I option) and the child process will be
94 started if there is an active VCL at the end of the initializa‐
95 tion.
96
97 When used with a relative file name, config is searched in the
98 vcl_path. It is possible to set this path prior to using -f op‐
99 tions with a -p option. During startup, varnishd doesn't com‐
100 plain about unsafe VCL paths: unlike the varnish-cli(7) that
101 could later be accessed remotely, starting varnishd requires lo‐
102 cal privileges.
103
104 -n workdir
105 Runtime directory for the shared memory, compiled VCLs etc.
106
107 In performance critical applications, this directory should be
108 on a RAM backed filesystem.
109
110 Relative paths will be appended to /var/run/ (NB: Binary pack‐
111 ages of Varnish may have adjusted this to the platform.)
112
113 The default value is /var/run/varnishd (NB: as above.)
114
115 Documentation options
116 For these options, varnishd prints information to standard output and
117 exits. When a -x option is used, it must be the only option (it outputs
118 documentation in reStructuredText, aka RST).
119
120 -?
121 Print the usage message.
122
123 -x parameter
124 Print documentation of the runtime parameters (-p options), see
125 List of Parameters.
126
127 -x vsl Print documentation of the tags used in the Varnish shared mem‐
128 ory log, see vsl(7).
129
130 -x cli Print documentation of the command line interface, see var‐
131 nish-cli(7).
132
133 -x builtin
134 Print the contents of the default VCL program builtin.vcl.
135
136 -x optstring
137 Print the optstring parameter to getopt(3) to help writing wrap‐
138 per scripts.
139
140 Operations options
141 -F Do not fork, run in the foreground. Only one of -F or -d can be
142 specified, and -F cannot be used together with -C.
143
144 -T <address[:port]>
145 Offer a management interface on the specified address and port.
146 See varnish-cli(7) for documentation of the management commands.
147 To disable the management interface use none.
148
149 -M <address:port>
150 Connect to this port and offer the command line interface.
151 Think of it as a reverse shell. When running with -M and there
152 is no backend defined the child process (the cache) will not
153 start initially.
154
155 -P file
156 Write the PID of the process to the specified file.
157
158 -i identity
159 Specify the identity of the Varnish server. This can be accessed
160 using server.identity from VCL.
161
162 The server identity is used for the received-by field of Via
163 headers generated by Varnish. For this reason, it must be a
164 valid token as defined by the HTTP grammar.
165
166 If not specified the output of gethostname(3) is used, in which
167 case the syntax is assumed to be correct.
168
169 -I clifile
              Execute the management commands in the file given as clifile
              before the worker process starts, see CLI Command File.
172
173 Tuning options
174 -t TTL Specifies the default time to live (TTL) for cached objects.
175 This is a shortcut for specifying the default_ttl run-time pa‐
176 rameter.
177
178 -p <param=value>
179 Set the parameter specified by param to the specified value, see
180 List of Parameters for details. This option can be used multiple
181 times to specify multiple parameters.
182
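              For example, a sketch of setting a few parameters at startup
              (the values are illustrative, not recommendations):

                 varnishd -f /etc/varnish/default.vcl \
                          -p default_ttl=300 -p default_grace=30 \
                          -p thread_pool_min=200 -p thread_pool_max=4000
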
183 -s <[name=]type[,options]>
184 Use the specified storage backend. See Storage Backend section.
185
186 This option can be used multiple times to specify multiple stor‐
187 age files. Name is referenced in logs, VCL, statistics, etc. If
188 name is not specified, "s0", "s1" and so forth is used.
189
190 -l <vsl>
191 Specifies size of the space for the VSL records, shorthand for
192 -p vsl_space=<vsl>. Scaling suffixes like 'K' and 'M' can be
193 used up to (G)igabytes. See vsl_space for more information.
194
195 Security options
196 -r <param[,param...]>
197 Make the listed parameters read only. This gives the system ad‐
198 ministrator a way to limit what the Varnish CLI can do. Con‐
199 sider making parameters such as cc_command, vcc_allow_inline_c
200 and vmod_path read only as these can potentially be used to es‐
201 calate privileges from the CLI.
202
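              For example, a sketch locking down the parameters mentioned
              above (any list of parameter names may be given):

                 varnishd -f /etc/varnish/default.vcl \
                          -r cc_command,vcc_allow_inline_c,vmod_path
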
203 -S secret-file
204 Path to a file containing a secret used for authorizing access
205 to the management port. To disable authentication use none.
206
              If this argument is not provided, a secret drawn from the
              system PRNG will be written to a file called _.secret in the
              working directory (see the -n option) with default ownership
              and permissions of the user having started varnish.
211
212 Thus, users wishing to delegate control over varnish will proba‐
213 bly want to create a custom secret file with appropriate permis‐
214 sions (ie. readable by a unix group to delegate control to).
215
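              For example, a sketch of creating such a custom secret file
              (the path, group and permissions are illustrative) and
              pointing varnishd at it:

                 dd if=/dev/urandom of=/etc/varnish/secret count=1 bs=512
                 chown root:varnish /etc/varnish/secret
                 chmod 640 /etc/varnish/secret
                 varnishd -f /etc/varnish/default.vcl \
                          -T localhost:6082 -S /etc/varnish/secret
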
216 -j <jail[,jailoptions]>
217 Specify the jailing mechanism to use. See Jail section.
218
219 Advanced, development and debugging options
220 -d Enables debugging mode: The parent process runs in the fore‐
221 ground with a CLI connection on stdin/stdout, and the child
222 process must be started explicitly with a CLI command. Terminat‐
223 ing the parent process will also terminate the child.
224
225 Only one of -d or -F can be specified, and -d cannot be used to‐
226 gether with -C.
227
228 -C Print VCL code compiled to C language and exit. Specify the VCL
229 file to compile with the -f option. Either -f or -b must be used
230 with -C, and -C cannot be used with -F or -d.
231
232 -V Display the version number and exit. This must be the only op‐
233 tion.
234
235 -h <type[,options]>
236 Specifies the hash algorithm. See Hash Algorithm section for a
237 list of supported algorithms.
238
239 -W waiter
240 Specifies the waiter type to use.
241
242 Hash Algorithm
243 The following hash algorithms are available:
244
245 -h critbit
246 self-scaling tree structure. The default hash algorithm in Var‐
247 nish Cache 2.1 and onwards. In comparison to a more traditional
248 B tree the critbit tree is almost completely lockless. Do not
249 change this unless you are certain what you're doing.
250
251 -h simple_list
252 A simple doubly-linked list. Not recommended for production
253 use.
254
255 -h <classic[,buckets]>
256 A standard hash table. The hash key is the CRC32 of the object's
257 URL modulo the size of the hash table. Each table entry points
258 to a list of elements which share the same hash key. The buckets
259 parameter specifies the number of entries in the hash table.
260 The default is 16383.
261
262 Storage Backend
263 The argument format to define storage backends is:
264
265 -s <[name]=kind[,options]>
266 If name is omitted, Varnish will name storages sN, starting with
267 s0 and incrementing N for every new storage.
268
269 For kind and options see details below.
270
       Storages can be used in VCL as storage.name, so, for example, if
       myStorage was defined by -s myStorage=malloc,5G, it could be used in
       VCL like so:
274
275 set beresp.storage = storage.myStorage;
276
277 A special name is Transient which is the default storage for un‐
278 cacheable objects as resulting from a pass, hit-for-miss or
279 hit-for-pass.
280
281 If no -s options are given, the default is:
282
283 -s default,100m
284
285 If no Transient storage is defined, the default is an unbound default
286 storage as if defined as:
287
288 -s Transient=default
289
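       For example, to put an upper bound on the memory used for short-lived
       and uncacheable objects, Transient can be defined explicitly (the
       sizes are illustrative):

          varnishd -f /etc/varnish/default.vcl \
                   -s malloc,4G -s Transient=malloc,512m
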
290 The following storage types and options are available:
291
292 -s <default[,size]>
293 The default storage type resolves to umem where available and
294 malloc otherwise.
295
296 -s <malloc[,size]>
297 malloc is a memory based backend.
298
299 -s <umem[,size]>
300 umem is a storage backend which is more efficient than malloc on
301 platforms where it is available.
302
303 See the section on umem in chapter Storage backends of The Var‐
304 nish Users Guide for details.
305
306 -s <file,path[,size[,granularity[,advice]]]>
307 The file backend stores data in a file on disk. The file will be
              accessed using mmap. Note that this storage provides no cache
              persistence.
310
311 The path is mandatory. If path points to a directory, a tempo‐
312 rary file will be created in that directory and immediately un‐
313 linked. If path points to a non-existing file, the file will be
314 created.
315
316 If size is omitted, and path points to an existing file with a
317 size greater than zero, the size of that file will be used. If
318 not, an error is reported.
319
320 Granularity sets the allocation block size. Defaults to the sys‐
321 tem page size or the filesystem block size, whichever is larger.
322
323 Advice tells the kernel how varnishd expects to use this mapped
324 region so that the kernel can choose the appropriate read-ahead
325 and caching techniques. Possible values are normal, random and
326 sequential, corresponding to MADV_NORMAL, MADV_RANDOM and
327 MADV_SEQUENTIAL madvise() advice argument, respectively. De‐
328 faults to random.
329
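              For example, a sketch of a named file storage (name, path and
              size are illustrative), leaving granularity and advice at
              their defaults:

                 varnishd -f /etc/varnish/default.vcl \
                          -s disk=file,/var/lib/varnish/storage.bin,50G
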
330 -s <persistent,path,size>
331 Persistent storage. Varnish will store objects in a file in a
332 manner that will secure the survival of most of the objects in
333 the event of a planned or unplanned shutdown of Varnish. The
334 persistent storage backend has multiple issues with it and will
335 likely be removed from a future version of Varnish.
336
337 Jail
338 Varnish jails are a generalization over various platform specific meth‐
339 ods to reduce the privileges of varnish processes. They may have spe‐
340 cific options. Available jails are:
341
342 -j <solaris[,worker=`privspec`]>
343 Reduce privileges(5) for varnishd and sub-processes to the mini‐
344 mally required set. Only available on platforms which have the
345 setppriv(2) call.
346
347 The optional worker argument can be used to pass a privi‐
348 lege-specification (see ppriv(1)) by which to extend the effec‐
349 tive set of the varnish worker process. While extended privi‐
350 leges may be required by custom vmods, not using the worker op‐
351 tion is always more secure.
352
353 Example to grant basic privileges to the worker process:
354
355 -j solaris,worker=basic
356
357 -j <unix[,user=`user`][,ccgroup=`group`][,workuser=`user`]>
358 Default on all other platforms when varnishd is started with an
359 effective uid of 0 ("as root").
360
361 With the unix jail mechanism activated, varnish will switch to
362 an alternative user for subprocesses and change the effective
363 uid of the master process whenever possible.
364
365 The optional user argument specifies which alternative user to
366 use. It defaults to varnish.
367
368 The optional ccgroup argument specifies a group to add to var‐
369 nish subprocesses requiring access to a c-compiler. There is no
370 default.
371
372 The optional workuser argument specifies an alternative user to
373 use for the worker process. It defaults to vcache.
374
375 The users given for the user and workuser arguments need to have
376 the same primary ("login") group.
377
378 To set up a system for the default users with a group name var‐
379 nish, shell commands similar to these may be used:
380
381 groupadd varnish
382 useradd -g varnish -d /nonexistent -s /bin/false \
383 -c "Varnish-Cache Daemon User" varnish
384 useradd -g varnish -d /nonexistent -s /bin/false \
385 -c "Varnish-Cache Worker User" vcache
386
387 -j none
388 last resort jail choice: With jail mechanism none, varnish will
389 run all processes with the privileges it was started with.
390
391 Management Interface
392 If the -T option was specified, varnishd will offer a command-line man‐
393 agement interface on the specified address and port. The recommended
394 way of connecting to the command-line management interface is through
395 varnishadm(1).
396
397 The commands available are documented in varnish-cli(7).
398
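       For example, assuming varnishd was started with -T localhost:6082 and
       -S /etc/varnish/secret (both values illustrative), the interface can
       be exercised with:

          varnishadm -T localhost:6082 -S /etc/varnish/secret ping
          varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.list
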
399 CLI Command File
400 The -I option makes it possible to run arbitrary management commands
401 when varnishd is launched, before the worker process is started. In
402 particular, this is the way to load configurations, apply labels to
403 them, and make a VCL instance active that uses those labels on startup:
404
405 vcl.load panic /etc/varnish_panic.vcl
406 vcl.load siteA0 /etc/varnish_siteA.vcl
407 vcl.load siteB0 /etc/varnish_siteB.vcl
408 vcl.load siteC0 /etc/varnish_siteC.vcl
409 vcl.label siteA siteA0
410 vcl.label siteB siteB0
411 vcl.label siteC siteC0
412 vcl.load main /etc/varnish_main.vcl
413 vcl.use main
414
415 Every line in the file, including the last line, must be terminated by
416 a newline or carriage return.
417
418 If a command in the file is prefixed with '-', failure will not abort
419 the startup.
420
421 Note that it is necessary to include an explicit vcl.use command to se‐
422 lect which VCL should be the active VCL when relying on CLI Command
423 File to load the configurations at startup.
424
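       For example, a sketch of starting varnishd with an empty -f option
       and letting a CLI command file such as the one above (stored under an
       illustrative path) load and activate the VCLs:

          varnishd -a :80 -s malloc,1G -f '' -I /etc/varnish/start.cli
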
RUN TIME PARAMETERS
   Run Time Parameter Flags
       Runtime parameters are marked with shorthand flags to avoid repeating
       the same text over and over in the table below. The meanings of the
       flags are:
430
431 • experimental
432
433 We have no solid information about good/bad/optimal values for this
       parameter. Feedback with experience and observations is most
       welcome.
436
437 • delayed
438
439 This parameter can be changed on the fly, but will not take effect
440 immediately.
441
442 • restart
443
444 The worker process must be stopped and restarted, before this parame‐
445 ter takes effect.
446
447 • reload
448
449 The VCL programs must be reloaded for this parameter to take effect.
450
451 • wizard
452
453 Do not touch unless you really know what you're doing.
454
455 • only_root
456
457 Only works if varnishd is running as root.
458
459 Default Value Exceptions on 32 bit Systems
460 Be aware that on 32 bit systems, certain default or maximum values are
461 reduced relative to the values listed below, in order to conserve VM
462 space:
463
464 • workspace_client: 24k
465
466 • workspace_backend: 20k
467
468 • http_resp_size: 8k
469
470 • http_req_size: 12k
471
472 • gzip_buffer: 4k
473
474 • vsl_buffer: 4k
475
476 • vsl_space: 1G (maximum)
477
478 • thread_pool_stack: 64k
479
480 List of Parameters
481 This text is produced from the same text you will find in the CLI if
482 you use the param.show command:
483
484 accept_filter
485 NB: This parameter depends on a feature which is not available on all
486 platforms.
487
488 • Units: bool
489
490 • Default: on (if your platform supports accept filters)
491
492 Enable kernel accept-filters. This may require a kernel module to be
493 loaded to have an effect when enabled.
494
       Enabling accept_filter may prevent some requests from reaching
       Varnish in the first place. Malformed requests may go unnoticed and
       not increase the client_req_400 counter. GET or HEAD requests with a
       body may be blocked altogether.
499
500 acceptor_sleep_decay
501 • Default: 0.9
502
503 • Minimum: 0
504
505 • Maximum: 1
506
507 • Flags: experimental
508
       If we run out of resources, such as file descriptors or worker
       threads, the acceptor will sleep between accepts. This parameter
       (multiplicatively) reduces the sleep duration for each successful
       accept. (ie: 0.9 = reduce by 10%)
513
514 acceptor_sleep_incr
515 • Units: seconds
516
517 • Default: 0.000
518
519 • Minimum: 0.000
520
521 • Maximum: 1.000
522
523 • Flags: experimental
524
       If we run out of resources, such as file descriptors or worker
       threads, the acceptor will sleep between accepts. This parameter
       controls how much longer we sleep, each time we fail to accept a new
       connection.
528
529 acceptor_sleep_max
530 • Units: seconds
531
532 • Default: 0.050
533
534 • Minimum: 0.000
535
536 • Maximum: 10.000
537
538 • Flags: experimental
539
540 If we run out of resources, such as file descriptors or worker threads,
541 the acceptor will sleep between accepts. This parameter limits how
542 long it can sleep between attempts to accept new connections.
543
544 auto_restart
545 • Units: bool
546
547 • Default: on
548
549 Automatically restart the child/worker process if it dies.
550
551 backend_idle_timeout
552 • Units: seconds
553
554 • Default: 60.000
555
556 • Minimum: 1.000
557
558 Timeout before we close unused backend connections.
559
560 backend_local_error_holddown
561 • Units: seconds
562
563 • Default: 10.000
564
565 • Minimum: 0.000
566
567 • Flags: experimental
568
       When connecting to backends, certain error codes (EADDRNOTAVAIL,
       EACCES, EPERM) signal a local resource shortage or configuration
       issue for which retrying connection attempts may worsen the situation
       due to the complexity of the operations involved in the kernel. This
       parameter prevents repeated connection attempts for the configured
       duration.
574
575 backend_remote_error_holddown
576 • Units: seconds
577
578 • Default: 0.250
579
580 • Minimum: 0.000
581
582 • Flags: experimental
583
       When connecting to backends, certain error codes (ECONNREFUSED,
       ENETUNREACH) signal fundamental connection issues such as the backend
       not accepting connections or routing problems for which repeated
       connection attempts are considered useless. This parameter prevents
       repeated connection attempts for the configured duration.
589
590 ban_cutoff
591 • Units: bans
592
593 • Default: 0
594
595 • Minimum: 0
596
597 • Flags: experimental
598
599 Expurge long tail content from the cache to keep the number of bans be‐
600 low this value. 0 disables.
601
602 When this parameter is set to a non-zero value, the ban lurker contin‐
603 ues to work the ban list as usual top to bottom, but when it reaches
604 the ban_cutoff-th ban, it treats all objects as if they matched a ban
605 and expurges them from cache. As actively used objects get tested
606 against the ban list at request time and thus are likely to be associ‐
607 ated with bans near the top of the ban list, with ban_cutoff, least re‐
608 cently accessed objects (the "long tail") are removed.
609
610 This parameter is a safety net to avoid bad response times due to bans
611 being tested at lookup time. Setting a cutoff trades response time for
612 cache efficiency. The recommended value is proportional to
613 rate(bans_lurker_tests_tested) / n_objects while the ban lurker is
614 working, which is the number of bans the system can sustain. The addi‐
615 tional latency due to request ban testing is in the order of ban_cutoff
616 / rate(bans_lurker_tests_tested). For example, for
617 rate(bans_lurker_tests_tested) = 2M/s and a tolerable latency of 100ms,
618 a good value for ban_cutoff may be 200K.
619
620 ban_dups
621 • Units: bool
622
623 • Default: on
624
625 Eliminate older identical bans when a new ban is added. This saves CPU
626 cycles by not comparing objects to identical bans. This is a waste of
627 time if you have many bans which are never identical.
628
629 ban_lurker_age
630 • Units: seconds
631
632 • Default: 60.000
633
634 • Minimum: 0.000
635
636 The ban lurker will ignore bans until they are this old. When a ban is
637 added, the active traffic will be tested against it as part of object
638 lookup. Because many applications issue bans in bursts, this parameter
639 holds the ban-lurker off until the rush is over. This should be set to
640 the approximate time which a ban-burst takes.
641
642 ban_lurker_batch
643 • Default: 1000
644
645 • Minimum: 1
646
647 The ban lurker sleeps ${ban_lurker_sleep} after examining this many ob‐
648 jects. Use this to pace the ban-lurker if it eats too many resources.
649
650 ban_lurker_holdoff
651 • Units: seconds
652
653 • Default: 0.010
654
655 • Minimum: 0.000
656
657 • Flags: experimental
658
659 How long the ban lurker sleeps when giving way to lookup due to lock
660 contention.
661
662 ban_lurker_sleep
663 • Units: seconds
664
665 • Default: 0.010
666
667 • Minimum: 0.000
668
669 How long the ban lurker sleeps after examining ${ban_lurker_batch} ob‐
670 jects. Use this to pace the ban-lurker if it eats too many resources.
671 A value of zero will disable the ban lurker entirely.
672
673 between_bytes_timeout
674 • Units: seconds
675
676 • Default: 60.000
677
678 • Minimum: 0.000
679
680 We only wait for this many seconds between bytes received from the
681 backend before giving up the fetch. VCL values, per backend or per
682 backend request take precedence. This parameter does not apply to
683 pipe'ed requests.
684
685 cc_command
686 NB: The actual default value for this parameter depends on the Varnish
687 build environment and options.
688
689 • Default: exec $CC $CFLAGS %w -shared -o %o %s
690
691 • Flags: must_reload
692
693 The command used for compiling the C source code to a dlopen(3) load‐
694 able object. The following expansions can be used:
695
696 • %s: the source file name
697
698 • %o: the output file name
699
700 • %w: the cc_warnings parameter
701
702 • %d: the raw default cc_command
703
704 • %D: the expanded default cc_command
705
706 • %n: the working directory (-n option)
707
708 • %%: a percent sign
709
710 Unknown percent expansion sequences are ignored, and to avoid future
711 incompatibilities percent characters should be escaped with a double
712 percent sequence.
713
714 The %d and %D expansions allow passing the parameter's default value to
715 a wrapper script to perform additional processing.
716
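       For example, a sketch delegating compilation to a hypothetical
       wrapper script (the script name is an assumption, not something
       shipped with Varnish) which receives the expanded default command and
       the file names:

          varnishd -f /etc/varnish/default.vcl \
                   -p 'cc_command=exec /usr/local/bin/vcl-cc %D %o %s'
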
717 cc_warnings
718 NB: The actual default value for this parameter depends on the Varnish
719 build environment and options.
720
721 • Default: -Wall -Werror
722
723 • Flags: must_reload
724
725 Warnings used when compiling the C source code with the cc_command pa‐
726 rameter. By default, VCL is compiled with the same set of warnings as
727 Varnish itself.
728
729 cli_limit
730 • Units: bytes
731
732 • Default: 48k
733
734 • Minimum: 128b
735
736 • Maximum: 99999999b
737
738 Maximum size of CLI response. If the response exceeds this limit, the
739 response code will be 201 instead of 200 and the last line will indi‐
740 cate the truncation.
741
742 cli_timeout
743 • Units: seconds
744
745 • Default: 60.000
746
747 • Minimum: 0.000
748
       Timeout for the child's replies to CLI requests from the management
       process.
750
751 clock_skew
752 • Units: seconds
753
754 • Default: 10
755
756 • Minimum: 0
757
       How much clock skew we are willing to accept between the backend and
       our own clock.
760
761 clock_step
762 • Units: seconds
763
764 • Default: 1.000
765
766 • Minimum: 0.000
767
768 How much observed clock step we are willing to accept before we panic.
769
770 connect_timeout
771 • Units: seconds
772
773 • Default: 3.500
774
775 • Minimum: 0.000
776
777 Default connection timeout for backend connections. We only try to con‐
778 nect to the backend for this many seconds before giving up. VCL can
779 override this default value for each backend and backend request.
780
781 critbit_cooloff
782 • Units: seconds
783
784 • Default: 180.000
785
786 • Minimum: 60.000
787
788 • Maximum: 254.000
789
790 • Flags: wizard
791
792 How long the critbit hasher keeps deleted objheads on the cooloff list.
793
794 debug
795 • Default: none
796
797 Enable/Disable various kinds of debugging.
798
799 none Disable all debugging
800
801 Use +/- prefix to set/reset individual bits:
802
803 req_state
804 VSL Request state engine
805
806 workspace
807 VSL Workspace operations
808
809 waitinglist
810 VSL Waitinglist events
811
812 syncvsl
813 Make VSL synchronous
814
815 hashedge
816 Edge cases in Hash
817
818 vclrel Rapid VCL release
819
820 lurker VSL Ban lurker
821
822 esi_chop
823 Chop ESI fetch to bits
824
825 flush_head
826 Flush after http1 head
827
828 vtc_mode
829 Varnishtest Mode
830
831 witness
832 Emit WITNESS lock records
833
834 vsm_keep
835 Keep the VSM file on restart
836
837 slow_acceptor
838 Slow down Acceptor
839
840 h2_nocheck
841 Disable various H2 checks
842
843 vmod_so_keep
844 Keep copied VMOD libraries
845
846 processors
847 Fetch/Deliver processors
848
849 protocol
850 Protocol debugging
851
852 vcl_keep
853 Keep VCL C and so files
854
855 lck Additional lock statistics
856
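       For example, a sketch enabling synchronous VSL writes and keeping the
       VSM file across restarts on a running instance (intended for
       debugging, not production use):

          varnishadm param.set debug +syncvsl,+vsm_keep
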
857 default_grace
858 • Units: seconds
859
860 • Default: 10s
861
862 • Minimum: 0.000
863
864 • Flags: obj_sticky
865
866 Default grace period. We will deliver an object this long after it has
867 expired, provided another thread is attempting to get a new copy.
868
869 default_keep
870 • Units: seconds
871
872 • Default: 0s
873
874 • Minimum: 0.000
875
876 • Flags: obj_sticky
877
878 Default keep period. We will keep a useless object around this long,
879 making it available for conditional backend fetches. That means that
880 the object will be removed from the cache at the end of ttl+grace+keep.
881
882 default_ttl
883 • Units: seconds
884
885 • Default: 2m
886
887 • Minimum: 0.000
888
889 • Flags: obj_sticky
890
891 The TTL assigned to objects if neither the backend nor the VCL code as‐
892 signs one.
893
894 experimental
895 • Default: none
896
897 Enable/Disable experimental features.
898
899 none Disable all experimental features
900
901 Use +/- prefix to set/reset individual bits:
902
903 drop_pools
904 Drop thread pools
905
906 feature
907 • Default: +validate_headers
908
909 Enable/Disable various minor features.
910
911 default
912 Set default value
913
914 none Disable all features.
915
916 Use +/- prefix to enable/disable individual feature:
917
918 http2 Enable HTTP/2 protocol support.
919
920 short_panic
921 Short panic message.
922
923 no_coredump
924 No coredumps. Must be set before child process starts.
925
926 https_scheme
927 Extract host from full URI in the HTTP/1 request line, if the
928 scheme is https.
929
930 http_date_postel
931 Tolerate non compliant timestamp headers like Date, Last-Mod‐
932 ified, Expires etc.
933
              esi_ignore_https
                     Convert <esi:include src="https://... to http://...
936
937 esi_disable_xml_check
938 Allow ESI processing on non-XML ESI bodies
939
940 esi_ignore_other_elements
941 Ignore XML syntax errors in ESI bodies.
942
943 esi_remove_bom
944 Ignore UTF-8 BOM in ESI bodies.
945
946 esi_include_onerror
947 Parse the onerror attribute of <esi:include> tags.
948
949 wait_silo
950 Wait for persistent silos to completely load before serving
951 requests.
952
953 validate_headers
954 Validate all header set operations to conform to RFC7230.
955
956 busy_stats_rate
957 Make busy workers comply with thread_stats_rate.
958
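       For example, a sketch enabling HTTP/2 support and the parsing of the
       ESI onerror attribute at startup:

          varnishd -f /etc/varnish/default.vcl \
                   -p feature=+http2,+esi_include_onerror
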
959 fetch_chunksize
960 • Units: bytes
961
962 • Default: 16k
963
964 • Minimum: 4k
965
966 • Flags: experimental
967
       The default chunksize used by fetcher. This should be bigger than the
       majority of objects with short TTLs. Internal limits in the
       storage_file module make increases above 128kb a dubious idea.
971
972 fetch_maxchunksize
973 • Units: bytes
974
975 • Default: 0.25G
976
977 • Minimum: 64k
978
979 • Flags: experimental
980
981 The maximum chunksize we attempt to allocate from storage. Making this
982 too large may cause delays and storage fragmentation.
983
984 first_byte_timeout
985 • Units: seconds
986
987 • Default: 60.000
988
989 • Minimum: 0.000
990
991 Default timeout for receiving first byte from backend. We only wait for
992 this many seconds for the first byte before giving up. VCL can over‐
993 ride this default value for each backend and backend request. This pa‐
994 rameter does not apply to pipe'ed requests.
995
996 gzip_buffer
997 • Units: bytes
998
999 • Default: 32k
1000
1001 • Minimum: 2k
1002
1003 • Flags: experimental
1004
       Size of malloc buffer used for gzip processing. These buffers are
       used for in-transit data, for instance gunzip'ed data being sent to a
       client. Making this space too small results in more overhead, writes
       to sockets etc; making it too big is probably just a waste of memory.
1009
1010 gzip_level
1011 • Default: 6
1012
1013 • Minimum: 0
1014
1015 • Maximum: 9
1016
1017 Gzip compression level: 0=debug, 1=fast, 9=best
1018
1019 gzip_memlevel
1020 • Default: 8
1021
1022 • Minimum: 1
1023
1024 • Maximum: 9
1025
1026 Gzip memory level 1=slow/least, 9=fast/most compression. Memory impact
1027 is 1=1k, 2=2k, ... 9=256k.
1028
1029 h2_header_table_size
1030 • Units: bytes
1031
1032 • Default: 4k
1033
1034 • Minimum: 0b
1035
1036 HTTP2 header table size. This is the size that will be used for the
1037 HPACK dynamic decoding table.
1038
1039 h2_initial_window_size
1040 • Units: bytes
1041
1042 • Default: 65535b
1043
1044 • Minimum: 65535b
1045
1046 • Maximum: 2147483647b
1047
1048 HTTP2 initial flow control window size.
1049
1050 h2_max_concurrent_streams
1051 • Units: streams
1052
1053 • Default: 100
1054
1055 • Minimum: 0
1056
1057 HTTP2 Maximum number of concurrent streams. This is the number of re‐
1058 quests that can be active at the same time for a single HTTP2 connec‐
1059 tion.
1060
1061 h2_max_frame_size
1062 • Units: bytes
1063
1064 • Default: 16k
1065
1066 • Minimum: 16k
1067
1068 • Maximum: 16777215b
1069
1070 HTTP2 maximum per frame payload size we are willing to accept.
1071
1072 h2_max_header_list_size
1073 • Units: bytes
1074
1075 • Default: 2147483647b
1076
1077 • Minimum: 0b
1078
1079 HTTP2 maximum size of an uncompressed header list.
1080
1081 h2_rx_window_increment
1082 • Units: bytes
1083
1084 • Default: 1M
1085
1086 • Minimum: 1M
1087
1088 • Maximum: 1G
1089
1090 • Flags: wizard
1091
       HTTP2 Receive Window Increments. How big credits we send in
       WINDOW_UPDATE frames. Only affects incoming request bodies (ie: POST,
       PUT etc.)
1094
1095 h2_rx_window_low_water
1096 • Units: bytes
1097
1098 • Default: 10M
1099
1100 • Minimum: 65535b
1101
1102 • Maximum: 1G
1103
1104 • Flags: wizard
1105
       HTTP2 Receive Window low water mark. We try to keep the window at
       least this big. Only affects incoming request bodies (ie: POST, PUT
       etc.)
1109
1110 h2_rxbuf_storage
1111 • Default: Transient
1112
1113 • Flags: must_restart
1114
1115 The name of the storage backend that HTTP/2 receive buffers should be
1116 allocated from.
1117
1118 http1_iovs
1119 • Units: struct iovec (=16 bytes)
1120
1121 • Default: 64
1122
1123 • Minimum: 5
1124
1125 • Maximum: 1024
1126
1127 • Flags: wizard
1128
1129 Number of io vectors to allocate for HTTP1 protocol transmission. A
1130 HTTP1 header needs 7 + 2 per HTTP header field. Allocated from
1131 workspace_thread.
1132
1133 http_gzip_support
1134 • Units: bool
1135
1136 • Default: on
1137
       Enable gzip support. When enabled Varnish requests compressed objects
       from the backend and stores them compressed. If a client does not
       support gzip encoding Varnish will uncompress compressed objects on
       demand. Varnish will also rewrite the Accept-Encoding header of
       clients indicating support for gzip to:
          Accept-Encoding: gzip
1144
1145 Clients that do not support gzip will have their Accept-Encoding header
1146 removed. For more information on how gzip is implemented please see the
1147 chapter on gzip in the Varnish reference.
1148
1149 When gzip support is disabled the variables beresp.do_gzip and
1150 beresp.do_gunzip have no effect in VCL.
1151
1152 http_max_hdr
1153 • Units: header lines
1154
1155 • Default: 64
1156
1157 • Minimum: 32
1158
1159 • Maximum: 65535
1160
1161 Maximum number of HTTP header lines we allow in
1162 {req|resp|bereq|beresp}.http (obj.http is autosized to the exact number
1163 of headers). Cheap, ~20 bytes, in terms of workspace memory. Note
1164 that the first line occupies five header lines.
1165
1166 http_range_support
1167 • Units: bool
1168
1169 • Default: on
1170
1171 Enable support for HTTP Range headers.
1172
1173 http_req_hdr_len
1174 • Units: bytes
1175
1176 • Default: 8k
1177
1178 • Minimum: 40b
1179
       Maximum length of any HTTP client request header we will allow. The
       limit is inclusive of its continuation lines.
1182
1183 http_req_size
1184 • Units: bytes
1185
1186 • Default: 32k
1187
1188 • Minimum: 0.25k
1189
1190 Maximum number of bytes of HTTP client request we will deal with. This
1191 is a limit on all bytes up to the double blank line which ends the HTTP
1192 request. The memory for the request is allocated from the client
1193 workspace (param: workspace_client) and this parameter limits how much
1194 of that the request is allowed to take up.
1195
1196 http_resp_hdr_len
1197 • Units: bytes
1198
1199 • Default: 8k
1200
1201 • Minimum: 40b
1202
       Maximum length of any HTTP backend response header we will allow.
       The limit is inclusive of its continuation lines.
1205
1206 http_resp_size
1207 • Units: bytes
1208
1209 • Default: 32k
1210
1211 • Minimum: 0.25k
1212
1213 Maximum number of bytes of HTTP backend response we will deal with.
1214 This is a limit on all bytes up to the double blank line which ends the
1215 HTTP response. The memory for the response is allocated from the back‐
1216 end workspace (param: workspace_backend) and this parameter limits how
1217 much of that the response is allowed to take up.
1218
1219 idle_send_timeout
1220 • Units: seconds
1221
1222 • Default: 60.000
1223
1224 • Minimum: 0.000
1225
1226 • Flags: delayed
1227
1228 Send timeout for individual pieces of data on client connections. May
1229 get extended if 'send_timeout' applies.
1230
1231 When this timeout is hit, the session is closed.
1232
1233 See the man page for setsockopt(2) or socket(7) under SO_SNDTIMEO for
1234 more information.
1235
1236 listen_depth
1237 • Units: connections
1238
1239 • Default: 1024
1240
1241 • Minimum: 0
1242
1243 • Flags: must_restart
1244
1245 Listen queue depth.
1246
1247 lru_interval
1248 • Units: seconds
1249
1250 • Default: 2.000
1251
1252 • Minimum: 0.000
1253
1254 • Flags: experimental
1255
1256 Grace period before object moves on LRU list. Objects are only moved
1257 to the front of the LRU list if they have not been moved there already
1258 inside this timeout period. This reduces the amount of lock operations
1259 necessary for LRU list access.
1260
1261 max_esi_depth
1262 • Units: levels
1263
1264 • Default: 5
1265
1266 • Minimum: 0
1267
1268 Maximum depth of esi:include processing.
1269
1270 max_restarts
1271 • Units: restarts
1272
1273 • Default: 4
1274
1275 • Minimum: 0
1276
1277 Upper limit on how many times a request can restart.
1278
1279 max_retries
1280 • Units: retries
1281
1282 • Default: 4
1283
1284 • Minimum: 0
1285
1286 Upper limit on how many times a backend fetch can retry.
1287
1288 max_vcl
1289 • Default: 100
1290
1291 • Minimum: 0
1292
1293 Threshold of loaded VCL programs. (VCL labels are not counted.) Pa‐
1294 rameter max_vcl_handling determines behaviour.
1295
1296 max_vcl_handling
1297 • Default: 1
1298
1299 • Minimum: 0
1300
1301 • Maximum: 2
1302
1303 Behaviour when attempting to exceed max_vcl loaded VCL.
1304
1305 • 0 - Ignore max_vcl parameter.
1306
1307 • 1 - Issue warning.
1308
1309 • 2 - Refuse loading VCLs.
1310
1311 nuke_limit
1312 • Units: allocations
1313
1314 • Default: 50
1315
1316 • Minimum: 0
1317
1318 • Flags: experimental
1319
       Maximum number of objects we attempt to nuke in order to make space
       for an object body.
1322
1323 pcre2_depth_limit
1324 • Default: 20
1325
1326 • Minimum: 1
1327
1328 The recursion depth-limit for the internal match logic in a
1329 pcre2_match().
1330
1331 (See: pcre2_set_depth_limit() in pcre2 docs.)
1332
1333 This puts an upper limit on the amount of stack used by PCRE2 for cer‐
1334 tain classes of regular expressions.
1335
1336 We have set the default value low in order to prevent crashes, at the
1337 cost of possible regexp matching failures.
1338
1339 Matching failures will show up in the log as VCL_Error messages.
1340
1341 pcre2_jit_compilation
1342 • Units: bool
1343
1344 • Default: on
1345
1346 Use the pcre2 JIT compiler if available.
1347
1348 pcre2_match_limit
1349 • Default: 10000
1350
1351 • Minimum: 1
1352
1353 The limit for the number of calls to the internal match logic in
1354 pcre2_match().
1355
1356 (See: pcre2_set_match_limit() in pcre2 docs.)
1357
1358 This parameter limits how much CPU time regular expression matching can
1359 soak up.
1360
1361 ping_interval
1362 • Units: seconds
1363
1364 • Default: 3
1365
1366 • Minimum: 0
1367
1368 • Flags: must_restart
1369
1370 Interval between pings from parent to child. Zero will disable pinging
1371 entirely, which makes it possible to attach a debugger to the child.
1372
1373 pipe_sess_max
1374 • Units: connections
1375
1376 • Default: 0
1377
1378 • Minimum: 0
1379
1380 Maximum number of sessions dedicated to pipe transactions.
1381
1382 pipe_timeout
1383 • Units: seconds
1384
1385 • Default: 60.000
1386
1387 • Minimum: 0.000
1388
       Idle timeout for PIPE sessions. If nothing has been received in
       either direction for this many seconds, the session is closed.
1391
1392 pool_req
1393 • Default: 10,100,10
1394
1395 Parameters for per worker pool request memory pool.
1396
1397 The three numbers are:
1398
1399 min_pool
1400 minimum size of free pool.
1401
1402 max_pool
1403 maximum size of free pool.
1404
1405 max_age
1406 max age of free element.
1407
1408 pool_sess
1409 • Default: 10,100,10
1410
1411 Parameters for per worker pool session memory pool.
1412
1413 The three numbers are:
1414
1415 min_pool
1416 minimum size of free pool.
1417
1418 max_pool
1419 maximum size of free pool.
1420
1421 max_age
1422 max age of free element.
1423
1424 pool_vbo
1425 • Default: 10,100,10
1426
1427 Parameters for backend object fetch memory pool.
1428
1429 The three numbers are:
1430
1431 min_pool
1432 minimum size of free pool.
1433
1434 max_pool
1435 maximum size of free pool.
1436
1437 max_age
1438 max age of free element.
1439
1440 prefer_ipv6
1441 • Units: bool
1442
1443 • Default: off
1444
1445 Prefer IPv6 address when connecting to backends which have both IPv4
1446 and IPv6 addresses.
1447
1448 rush_exponent
1449 • Units: requests per request
1450
1451 • Default: 3
1452
1453 • Minimum: 2
1454
1455 • Flags: experimental
1456
       How many parked requests we start for each completed request on the
       object. NB: Even with the implicit delay of delivery, this parameter
       controls an exponential increase in number of worker threads.
1460
1461 send_timeout
1462 • Units: seconds
1463
1464 • Default: 600.000
1465
1466 • Minimum: 0.000
1467
1468 • Flags: delayed
1469
1470 Total timeout for ordinary HTTP1 responses. Does not apply to some in‐
1471 ternally generated errors and pipe mode.
1472
1473 When 'idle_send_timeout' is hit while sending an HTTP1 response, the
1474 timeout is extended unless the total time already taken for sending the
1475 response in its entirety exceeds this many seconds.
1476
       When this timeout is hit, the session is closed.
1478
1479 shortlived
1480 • Units: seconds
1481
1482 • Default: 10.000
1483
1484 • Minimum: 0.000
1485
1486 Objects created with (ttl+grace+keep) shorter than this are always put
1487 in transient storage.
1488
1489 sigsegv_handler
1490 • Units: bool
1491
1492 • Default: on
1493
1494 • Flags: must_restart
1495
1496 Install a signal handler which tries to dump debug information on seg‐
1497 mentation faults, bus errors and abort signals.
1498
1499 syslog_cli_traffic
1500 • Units: bool
1501
1502 • Default: on
1503
1504 Log all CLI traffic to syslog(LOG_INFO).
1505
1506 tcp_fastopen
1507 NB: This parameter depends on a feature which is not available on all
1508 platforms.
1509
1510 • Units: bool
1511
1512 • Default: off
1513
1514 • Flags: must_restart
1515
1516 Enable TCP Fast Open extension.
1517
1518 tcp_keepalive_intvl
1519 NB: This parameter depends on a feature which is not available on all
1520 platforms.
1521
1522 • Units: seconds
1523
1524 • Default: platform dependent
1525
1526 • Minimum: 1.000
1527
1528 • Maximum: 100.000
1529
1530 • Flags: experimental
1531
1532 The number of seconds between TCP keep-alive probes. Ignored for Unix
1533 domain sockets.
1534
1535 tcp_keepalive_probes
1536 NB: This parameter depends on a feature which is not available on all
1537 platforms.
1538
1539 • Units: probes
1540
1541 • Default: platform dependent
1542
1543 • Minimum: 1
1544
1545 • Maximum: 100
1546
1547 • Flags: experimental
1548
1549 The maximum number of TCP keep-alive probes to send before giving up
1550 and killing the connection if no response is obtained from the other
1551 end. Ignored for Unix domain sockets.
1552
1553 tcp_keepalive_time
1554 NB: This parameter depends on a feature which is not available on all
1555 platforms.
1556
1557 • Units: seconds
1558
1559 • Default: platform dependent
1560
1561 • Minimum: 1.000
1562
1563 • Maximum: 7200.000
1564
1565 • Flags: experimental
1566
1567 The number of seconds a connection needs to be idle before TCP begins
1568 sending out keep-alive probes. Ignored for Unix domain sockets.
1569
1570 thread_pool_add_delay
1571 • Units: seconds
1572
1573 • Default: 0.000
1574
1575 • Minimum: 0.000
1576
1577 • Flags: experimental
1578
1579 Wait at least this long after creating a thread.
1580
1581 Some (buggy) systems may need a short (sub-second) delay between creat‐
1582 ing threads. Set this to a few milliseconds if you see the
1583 'threads_failed' counter grow too much.
1584
1585 Setting this too high results in insufficient worker threads.
1586
1587 thread_pool_destroy_delay
1588 • Units: seconds
1589
1590 • Default: 1.000
1591
1592 • Minimum: 0.010
1593
1594 • Flags: delayed, experimental
1595
1596 Wait this long after destroying a thread.
1597
1598 This controls the decay of thread pools when idle(-ish).
1599
1600 thread_pool_fail_delay
1601 • Units: seconds
1602
1603 • Default: 0.200
1604
1605 • Minimum: 0.010
1606
1607 • Flags: experimental
1608
1609 Wait at least this long after a failed thread creation before trying to
1610 create another thread.
1611
1612 Failure to create a worker thread is often a sign that the end is
1613 near, because the process is running out of some resource. This delay
1614 tries to not rush the end on needlessly.
1615
1616 If thread creation failures are a problem, check that thread_pool_max
1617 is not too high.
1618
       It may also help to increase thread_pool_timeout and thread_pool_min,
       to reduce the rate at which threads are destroyed and later
       recreated.
1621
1622 thread_pool_max
1623 • Units: threads
1624
1625 • Default: 5000
1626
1627 • Minimum: thread_pool_min
1628
1629 • Flags: delayed
1630
1631 The maximum number of worker threads in each pool.
1632
1633 Do not set this higher than you have to, since excess worker threads
1634 soak up RAM and CPU and generally just get in the way of getting work
1635 done.
1636
1637 thread_pool_min
1638 • Units: threads
1639
1640 • Default: 100
1641
1642 • Minimum: 5
1643
1644 • Maximum: thread_pool_max
1645
1646 • Flags: delayed
1647
1648 The minimum number of worker threads in each pool.
1649
1650 Increasing this may help ramp up faster from low load situations or
1651 when threads have expired.
1652
       Technical minimum is 5 threads, but this parameter is strongly
       recommended to be at least 10.
1655
1656 thread_pool_reserve
1657 • Units: threads
1658
1659 • Default: 0 (auto-tune: 5% of thread_pool_min)
1660
1661 • Maximum: 95% of thread_pool_min
1662
1663 • Flags: delayed
1664
1665 The number of worker threads reserved for vital tasks in each pool.
1666
1667 Tasks may require other tasks to complete (for example, client requests
1668 may require backend requests, http2 sessions require streams, which re‐
1669 quire requests). This reserve is to ensure that lower priority tasks do
1670 not prevent higher priority tasks from running even under high load.
1671
1672 The effective value is at least 5 (the number of internal priority
1673 classes), irrespective of this parameter.
1674
1675 thread_pool_stack
1676 • Units: bytes
1677
1678 • Default: 80k
1679
1680 • Minimum: sysconf(_SC_THREAD_STACK_MIN)
1681
1682 • Flags: delayed
1683
1684 Worker thread stack size. This will likely be rounded up to a multiple
1685 of 4k (or whatever the page_size might be) by the kernel.
1686
1687 The required stack size is primarily driven by the depth of the
1688 call-tree. The most common relevant determining factors in varnish core
1689 code are GZIP (un)compression, ESI processing and regular expression
1690 matches. VMODs may also require significant amounts of additional
1691 stack. The nesting depth of VCL subs is another factor, although typi‐
1692 cally not predominant.
1693
1694 The stack size is per thread, so the maximum total memory required for
1695 worker thread stacks is in the order of size = thread_pools x
1696 thread_pool_max x thread_pool_stack.
1697
1698 Thus, in particular for setups with many threads, keeping the stack
1699 size at a minimum helps reduce the amount of memory required by Var‐
1700 nish.
1701
1702 On the other hand, thread_pool_stack must be large enough under all
1703 circumstances, otherwise varnish will crash due to a stack overflow.
1704 Usually, a stack overflow manifests itself as a segmentation fault (aka
1705 segfault / SIGSEGV) with the faulting address being near the stack
1706 pointer (sp).
1707
1708 Unless stack usage can be reduced, thread_pool_stack must be increased
1709 when a stack overflow occurs. Setting it in 150%-200% increments is
1710 recommended until stack overflows cease to occur.
1711
1712 thread_pool_timeout
1713 • Units: seconds
1714
1715 • Default: 300.000
1716
1717 • Minimum: 10.000
1718
1719 • Flags: delayed, experimental
1720
1721 Thread idle threshold.
1722
1723 Threads in excess of thread_pool_min, which have been idle for at least
1724 this long, will be destroyed.
1725
1726 thread_pool_watchdog
1727 • Units: seconds
1728
1729 • Default: 60.000
1730
1731 • Minimum: 0.100
1732
1733 • Flags: experimental
1734
1735 Thread queue stuck watchdog.
1736
       If no queued work has been released for this long, the worker process
       panics itself.
1739
1740 thread_pools
1741 • Units: pools
1742
1743 • Default: 2
1744
1745 • Minimum: 1
1746
1747 • Maximum: 32
1748
1749 • Flags: delayed, experimental
1750
1751 Number of worker thread pools.
1752
1753 Increasing the number of worker pools decreases lock contention. Each
1754 worker pool also has a thread accepting new connections, so for very
1755 high rates of incoming new connections on systems with many cores, in‐
1756 creasing the worker pools may be required.
1757
1758 Too many pools waste CPU and RAM resources, and more than one pool for
1759 each CPU is most likely detrimental to performance.
1760
1761 Can be increased on the fly, but decreases require a restart to take
1762 effect, unless the drop_pools experimental debug flag is set.
1763
1764 thread_queue_limit
1765 • Units: requests
1766
1767 • Default: 20
1768
1769 • Minimum: 0
1770
1771 • Flags: experimental
1772
1773 Permitted request queue length per thread-pool.
1774
1775 This sets the number of requests we will queue, waiting for an avail‐
1776 able thread. Above this limit sessions will be dropped instead of
1777 queued.
1778
1779 thread_stats_rate
1780 • Units: requests
1781
1782 • Default: 10
1783
1784 • Minimum: 0
1785
1786 • Flags: experimental
1787
1788 Worker threads accumulate statistics, and dump these into the global
1789 stats counters if the lock is free when they finish a job (re‐
       quest/fetch etc.) This parameter defines the maximum number of jobs a
       worker thread may handle, before it is forced to dump its accumulated
       stats into the global counters.
1793
1794 timeout_idle
1795 • Units: seconds
1796
1797 • Default: 5.000
1798
1799 • Minimum: 0.000
1800
1801 Idle timeout for client connections.
1802
1803 A connection is considered idle until we have received the full request
1804 headers.
1805
1806 This parameter is particularly relevant for HTTP1 keepalive connec‐
1807 tions which are closed unless the next request is received before this
1808 timeout is reached.
1809
1810 timeout_linger
1811 • Units: seconds
1812
1813 • Default: 0.050
1814
1815 • Minimum: 0.000
1816
1817 • Flags: experimental
1818
1819 How long the worker thread lingers on an idle session before handing it
1820 over to the waiter. When sessions are reused, as much as half of all
1821 reuses happen within the first 100 msec of the previous request com‐
1822 pleting. Setting this too high results in worker threads not doing
1823 anything for their keep, setting it too low just means that more ses‐
1824 sions take a detour around the waiter.
1825
1826 transit_buffer
1827 • Units: bytes
1828
1829 • Default: 0b
1830
1831 • Minimum: 0b
1832
1833 The number of bytes which Varnish buffers for uncacheable backend
1834 streaming fetches - in other words, how many bytes Varnish reads from
1835 the backend ahead of what has been sent to the client. A zero value
1836 means no limit, the object is fetched as fast as possible.
1837
1838 When dealing with slow clients, setting this parameter to non-zero can
1839 prevent large uncacheable objects from being stored in full when the
1840 intent is to simply stream them to the client. As a result, a slow
1841 client transaction holds onto a backend connection until the end of the
1842 delivery.
1843
       This parameter is the default for the VCL variable
       beresp.transit_buffer, which can be used to control the transit
       buffer per backend request.
1847
1848 vary_notice
1849 • Units: variants
1850
1851 • Default: 10
1852
1853 • Minimum: 1
1854
1855 How many variants need to be evaluated to log a Notice that there might
1856 be too many variants.
1857
1858 vcc_allow_inline_c
1859 Deprecated alias for the vcc_feature parameter.
1860
1861 vcc_err_unref
1862 Deprecated alias for the vcc_feature parameter.
1863
1864 vcc_feature
1865 • Default: +err_unref,+unsafe_path
1866
1867 Enable/Disable various VCC behaviors.
1868
1869 default
1870 Set default value
1871
1872 none Disable all behaviors.
1873
1874 Use +/- prefix to enable/disable individual behavior:
1875
1876 err_unref
1877 Unreferenced VCL objects result in error.
1878
1879 allow_inline_c
1880 Allow inline C code in VCL.
1881
1882 unsafe_path
1883 Allow '/' in vmod & include paths. Allow 'import ... from
1884 ...'.
1885
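       For example, a sketch allowing inline C while tolerating unreferenced
       VCL objects:

          varnishd -f /etc/varnish/default.vcl \
                   -p vcc_feature=+allow_inline_c,-err_unref
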
1886 vcc_unsafe_path
1887 Deprecated alias for the vcc_feature parameter.
1888
1889 vcl_cooldown
1890 • Units: seconds
1891
1892 • Default: 600.000
1893
1894 • Minimum: 1.000
1895
1896 How long a VCL is kept warm after being replaced as the active VCL
1897 (granularity approximately 30 seconds).
1898
1899 vcl_path
1900 NB: The actual default value for this parameter depends on the Varnish
1901 build environment and options.
1902
1903 • Default: ${sysconfdir}/varnish:${datadir}/varnish/vcl
1904
1905 Directory (or colon separated list of directories) from which relative
1906 VCL filenames (vcl.load and include) are to be found. By default Var‐
1907 nish searches VCL files in both the system configuration and shared
1908 data directories to allow packages to drop their VCL files in a stan‐
1909 dard location where relative includes would work.
1910
1911 vmod_path
1912 NB: The actual default value for this parameter depends on the Varnish
1913 build environment and options.
1914
1915 • Default: ${libdir}/varnish/vmods
1916
1917 Directory (or colon separated list of directories) where VMODs are to
1918 be found.
1919
1920 vsl_buffer
1921 • Units: bytes
1922
1923 • Default: 16k
1924
1925 • Minimum: vsl_reclen + 12 bytes
1926
1927 Bytes of (req-/backend-)workspace dedicated to buffering VSL records.
1928 When this parameter is adjusted, most likely workspace_client and
1929 workspace_backend will have to be adjusted by the same amount.
1930
1931 Setting this too high costs memory, setting it too low will cause more
1932 VSL flushes and likely increase lock-contention on the VSL mutex.
1933
1934 vsl_mask
1935 • Default: -Debug,-ObjProtocol,-ObjStatus,-ObjReason,-Obj‐
1936 Header,-VCL_trace,-ExpKill,-WorkThread,-Hash,-VfpAcct,-H2Rx‐
1937 Hdr,-H2RxBody,-H2TxHdr,-H2TxBody,-VdpAcct
1938
1939 Mask individual VSL messages from being logged.
1940
1941 default
1942 Set default value
1943
1944 Use +/- prefix in front of VSL tag name to unmask/mask individual VSL
1945 messages.
1946
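       For example, a sketch unmasking Hash records while additionally
       masking VCL_Log records on a running instance:

          varnishadm param.set vsl_mask +Hash,-VCL_Log
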
1947 vsl_reclen
1948 • Units: bytes
1949
1950 • Default: 255b
1951
1952 • Minimum: 16b
1953
1954 • Maximum: vsl_buffer - 12 bytes
1955
1956 Maximum number of bytes in SHM log record.
1957
1958 vsl_space
1959 • Units: bytes
1960
1961 • Default: 80M
1962
1963 • Minimum: 1M
1964
1965 • Maximum: 4G
1966
1967 • Flags: must_restart
1968
1969 The amount of space to allocate for the VSL fifo buffer in the VSM mem‐
1970 ory segment. If you make this too small, varnish{ncsa|log} etc will
1971 not be able to keep up. Making it too large just costs memory re‐
1972 sources.
1973
1974 vsm_free_cooldown
1975 • Units: seconds
1976
1977 • Default: 60.000
1978
1979 • Minimum: 10.000
1980
1981 • Maximum: 600.000
1982
1983 How long VSM memory is kept warm after a deallocation (granularity ap‐
1984 proximately 2 seconds).
1985
1986 workspace_backend
1987 • Units: bytes
1988
1989 • Default: 96k
1990
1991 • Minimum: 1k
1992
1993 • Flags: delayed
1994
1995 Bytes of HTTP protocol workspace for backend HTTP req/resp. If larger
1996 than 4k, use a multiple of 4k for VM efficiency.
1997
1998 workspace_client
1999 • Units: bytes
2000
2001 • Default: 96k
2002
2003 • Minimum: 9k
2004
2005 • Flags: delayed
2006
       Bytes of HTTP protocol workspace for client HTTP req/resp. Use a mul‐
2008 tiple of 4k for VM efficiency. For HTTP/2 compliance this must be at
2009 least 20k, in order to receive fullsize (=16k) frames from the client.
2010 That usually happens only in POST/PUT bodies. For other traffic-pat‐
2011 terns smaller values work just fine.
2012
2013 workspace_session
2014 • Units: bytes
2015
2016 • Default: 0.75k
2017
2018 • Minimum: 384b
2019
2020 • Flags: delayed
2021
2022 Allocation size for session structure and workspace. The workspace
2023 is primarily used for TCP connection addresses. If larger than 4k, use
2024 a multiple of 4k for VM efficiency.
2025
2026 workspace_thread
2027 • Units: bytes
2028
2029 • Default: 2k
2030
2031 • Minimum: 0.25k
2032
2033 • Maximum: 8k
2034
2035 • Flags: delayed
2036
2037 Bytes of auxiliary workspace per thread. This workspace is used for
2038 certain temporary data structures during the operation of a worker
2039 thread. One use is for the IO-vectors used during delivery. Setting
2040 this parameter too low may increase the number of writev() syscalls,
2041 setting it too high just wastes space. ~0.1k + UIO_MAXIOV *
2042 sizeof(struct iovec) (typically = ~16k for 64bit) is considered the
2043 maximum sensible value under any known circumstances (excluding exotic
2044 vmod use).
2045
EXIT CODES
       Varnish and bundled tools will, in most cases, exit with one of the
       following codes:
2049
2050 • 0 OK
2051
2052 • 1 Some error which could be system-dependent and/or transient
2053
2054 • 2 Serious configuration / parameter error - retrying with the same
2055 configuration / parameters is most likely useless
2056
       The varnishd master process may also OR its exit code:
2058
2059 • with 0x20 when the varnishd child process died,
2060
2061 • with 0x40 when the varnishd child process was terminated by a signal
2062 and
2063
2064 • with 0x80 when a core was dumped.
2065
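       For example, a minimal shell sketch (the addresses are illustrative)
       that separates the base code from the ORed bits after a foreground
       varnishd exits:

          varnishd -F -a :80 -b 127.0.0.1:8080
          status=$?
          [ $((status & 32)) -ne 0 ]  && echo "child process died"
          [ $((status & 64)) -ne 0 ]  && echo "child terminated by signal"
          [ $((status & 128)) -ne 0 ] && echo "core dumped"
          echo "base exit code: $((status & 31))"
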
SEE ALSO
       • varnishlog(1)
2068
2069 • varnishhist(1)
2070
2071 • varnishncsa(1)
2072
2073 • varnishstat(1)
2074
2075 • varnishtop(1)
2076
2077 • varnish-cli(7)
2078
2079 • vcl(7)
2080
HISTORY
       The varnishd daemon was developed by Poul-Henning Kamp in cooperation
       with Verdens Gang AS and Varnish Software.
2084
2085 This manual page was written by Dag-Erling Smørgrav with updates by
2086 Stig Sandbeck Mathisen <ssm@debian.org>, Nils Goroll and others.
2087
COPYRIGHT
       This document is licensed under the same licence as Varnish itself.
       See LICENCE for details.
2091
2092 • Copyright (c) 2007-2015 Varnish Software AS
2093



                                                                  VARNISHD(1)