VARNISHD(1)                                                        VARNISHD(1)
2
3
4
NAME
varnishd - HTTP accelerator daemon
7
SYNOPSIS
varnishd
10 [-a [name=][listen_address[,PROTO]] [-b [host[:port]|path]] [-C]
11 [-d] [-F] [-f config] [-h type[,options]] [-I clifile] [-i iden‐
12 tity] [-j jail[,jailoptions]] [-l vsl] [-M address:port] [-n
13 workdir] [-P file] [-p param=value] [-r param[,param...]] [-S
14 secret-file] [-s [name=]kind[,options]] [-T address[:port]] [-t
15 TTL] [-V] [-W waiter]
16
17 varnishd [-x parameter|vsl|cli|builtin|optstring]
18
19 varnishd [-?]
20
DESCRIPTION
The varnishd daemon accepts HTTP requests from clients, passes them on
23 to a backend server and caches the returned documents to better satisfy
24 future requests for the same document.
25
OPTIONS
Basic options
28 -a <[name=][listen_address[,PROTO]]>
Accept client requests on the specified listen_address (see
below).
31
32 Name is referenced in logs. If name is not specified, "a0",
33 "a1", etc. is used.
34
35 PROTO can be "HTTP" (the default) or "PROXY". Both version 1
36 and 2 of the proxy protocol can be used.
37
38 Multiple -a arguments are allowed.
39
40 If no -a argument is given, the default -a :80 will listen to
41 all IPv4 and IPv6 interfaces.
42
43 -a <[name=][ip_address][:port][,PROTO]>
44 The ip_address can be a host name ("localhost"), an IPv4 dot‐
45 ted-quad ("127.0.0.1") or an IPv6 address enclosed in square
46 brackets ("[::1]")
47
48 If port is not specified, port 80 (http) is used.
49
50 At least one of ip_address or port is required.
51
52 -a <[name=][path][,PROTO][,user=name][,group=name][,mode=octal]>
(VCL 4.1 and higher)
54
55 Accept connections on a Unix domain socket. Path must be abso‐
56 lute ("/path/to/listen.sock").
57
58 The user, group and mode sub-arguments may be used to specify
59 the permissions of the socket file -- use names for user and
60 group, and a 3-digit octal value for mode.
61
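For example, a single varnishd instance could accept plain HTTP on port 80,
PROXY-protocol traffic on a separate address, and local connections on a
Unix domain socket (the names, address and path below are illustrative;
combine with -b or -f as usual):

    varnishd -a http=:80 \
             -a proxy=192.0.2.1:8443,PROXY \
             -a local=/var/run/varnish.sock,PROXY,user=varnish,mode=660 \
             -f /etc/varnish/default.vcl
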
62 -b <[host[:port]|path]>
63 Use the specified host as backend server. If port is not speci‐
64 fied, the default is 8080.
65
66 If the value of -b begins with /, it is interpreted as the abso‐
67 lute path of a Unix domain socket to which Varnish connects. In
68 that case, the value of -b must satisfy the conditions required
69 for the .path field of a backend declaration, see vcl(7). Back‐
70 ends with Unix socket addresses may only be used with VCL ver‐
71 sions >= 4.1.
72
-b can be used only once, and not together with -f.
74
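For example (the host, port and socket path below are illustrative):

    varnishd -b 127.0.0.1:8080
    varnishd -b /var/run/backend.sock

The second form requires VCL version 4.1 or higher, as noted above.
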
75 -f config
76 Use the specified VCL configuration file instead of the builtin
77 default. See vcl(7) for details on VCL syntax.
78
79 If a single -f option is used, then the VCL instance loaded from
80 the file is named "boot" and immediately becomes active. If more
81 than one -f option is used, the VCL instances are named "boot0",
82 "boot1" and so forth, in the order corresponding to the -f argu‐
83 ments, and the last one is named "boot", which becomes active.
84
85 Either -b or one or more -f options must be specified, but not
86 both, and they cannot both be left out, unless -d is used to
87 start varnishd in debugging mode. If the empty string is speci‐
88 fied as the sole -f option, then varnishd starts without start‐
89 ing the worker process, and the management process will accept
90 CLI commands. You can also combine an empty -f option with an
91 initialization script (-I option) and the child process will be
92 started if there is an active VCL at the end of the initializa‐
93 tion.
94
When used with a relative file name, config is searched for in the
96 vcl_path. It is possible to set this path prior to using -f op‐
97 tions with a -p option. During startup, varnishd doesn't com‐
98 plain about unsafe VCL paths: unlike the varnish-cli(7) that
99 could later be accessed remotely, starting varnishd requires lo‐
100 cal privileges.
101
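For example, loading two VCL files at startup (the file names are
illustrative) creates the instances "boot0" and "boot", with the last
one active:

    varnishd -f /etc/varnish/common.vcl -f /etc/varnish/site.vcl
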
102 -n workdir
103 Runtime directory for the shared memory, compiled VCLs etc.
104
105 In performance critical applications, this directory should be
106 on a RAM backed filesystem.
107
108 Relative paths will be appended to /var/run/ (NB: Binary pack‐
109 ages of Varnish may have adjusted this to the platform.)
110
111 The default value is /var/run/varnishd (NB: as above.)
112
113 Documentation options
114 For these options, varnishd prints information to standard output and
115 exits. When a -x option is used, it must be the only option (it outputs
116 documentation in reStructuredText, aka RST).
117
118 -?
119 Print the usage message.
120
121 -x parameter
122 Print documentation of the runtime parameters (-p options), see
123 List of Parameters.
124
125 -x vsl Print documentation of the tags used in the Varnish shared mem‐
126 ory log, see vsl(7).
127
128 -x cli Print documentation of the command line interface, see var‐
129 nish-cli(7).
130
131 -x builtin
132 Print the contents of the default VCL program builtin.vcl.
133
134 -x optstring
135 Print the optstring parameter to getopt(3) to help writing wrap‐
136 per scripts.
137
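As a sketch of the intended use of -x optstring, a wrapper script might
feed the printed string to getopts to recognise varnishd's options (the
loop body is illustrative):

    #!/bin/sh
    optstring="$(varnishd -x optstring)"
    while getopts "$optstring" opt; do
        echo "recognised varnishd option: -$opt ${OPTARG:-}"
    done
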
138 Operations options
139 -F Do not fork, run in the foreground. Only one of -F or -d can be
140 specified, and -F cannot be used together with -C.
141
142 -T <address[:port]>
143 Offer a management interface on the specified address and port.
144 See varnish-cli(7) for documentation of the management commands.
145 To disable the management interface use none.
146
147 -M <address:port>
148 Connect to this port and offer the command line interface.
149 Think of it as a reverse shell. When running with -M and there
150 is no backend defined the child process (the cache) will not
151 start initially.
152
153 -P file
154 Write the PID of the process to the specified file.
155
156 -i identity
157 Specify the identity of the Varnish server. This can be accessed
158 using server.identity from VCL.
159
160 The server identity is used for the received-by field of Via
161 headers generated by Varnish. For this reason, it must be a
162 valid token as defined by the HTTP grammar.
163
164 If not specified the output of gethostname(3) is used, in which
165 case the syntax is assumed to be correct.
166
167 -I clifile
168 Execute the management commands in the file given as clifile be‐
fore the worker process starts, see CLI Command File.
170
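For example, a foreground instance with a management interface, a PID
file, an explicit identity and a CLI command file (all paths and names
below are illustrative):

    varnishd -F -f /etc/varnish/default.vcl \
             -T localhost:6082 \
             -P /var/run/varnishd.pid \
             -i cache-node-1 \
             -I /etc/varnish/start.cli
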
171 Tuning options
172 -t TTL Specifies the default time to live (TTL) for cached objects.
173 This is a shortcut for specifying the default_ttl run-time pa‐
174 rameter.
175
176 -p <param=value>
177 Set the parameter specified by param to the specified value, see
178 List of Parameters for details. This option can be used multiple
179 times to specify multiple parameters.
180
181 -s <[name=]type[,options]>
182 Use the specified storage backend. See Storage Backend section.
183
184 This option can be used multiple times to specify multiple stor‐
185 age files. Name is referenced in logs, VCL, statistics, etc. If
186 name is not specified, "s0", "s1" and so forth is used.
187
188 -l <vsl>
Specifies the size of the space for the VSL records, shorthand for
190 -p vsl_space=<vsl>. Scaling suffixes like 'K' and 'M' can be
191 used up to (G)igabytes. See vsl_space for more information.
192
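For example, the tuning options can be combined on one command line (the
values below are illustrative, not recommendations):

    varnishd -b 127.0.0.1:8080 -t 120 -p default_grace=10 \
             -s malloc,1G -l 100M
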
193 Security options
194 -r <param[,param...]>
195 Make the listed parameters read only. This gives the system ad‐
196 ministrator a way to limit what the Varnish CLI can do. Con‐
197 sider making parameters such as cc_command, vcc_allow_inline_c
198 and vmod_path read only as these can potentially be used to es‐
199 calate privileges from the CLI.
200
201 -S secret-file
202 Path to a file containing a secret used for authorizing access
203 to the management port. To disable authentication use none.
204
205 If this argument is not provided, a secret drawn from the system
206 PRNG will be written to a file called _.secret in the working
directory (see the -n option) with the default ownership and
permissions of the user who started varnish.
209
210 Thus, users wishing to delegate control over varnish will proba‐
211 bly want to create a custom secret file with appropriate permis‐
sions (i.e. readable by a Unix group to delegate control to).
213
214 -j <jail[,jailoptions]>
215 Specify the jailing mechanism to use. See Jail section.
216
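For example (the secret file path and parameter list below are
illustrative):

    varnishd -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -r cc_command,vcc_allow_inline_c,vmod_path \
             -j unix,user=varnish
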
217 Advanced, development and debugging options
218 -d Enables debugging mode: The parent process runs in the fore‐
219 ground with a CLI connection on stdin/stdout, and the child
220 process must be started explicitly with a CLI command. Terminat‐
221 ing the parent process will also terminate the child.
222
223 Only one of -d or -F can be specified, and -d cannot be used to‐
224 gether with -C.
225
226 -C Print VCL code compiled to C language and exit. Specify the VCL
227 file to compile with the -f option. Either -f or -b must be used
228 with -C, and -C cannot be used with -F or -d.
229
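For example, to inspect the C code generated for a VCL file (the file
name is illustrative):

    varnishd -C -f /etc/varnish/default.vcl
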
230 -V Display the version number and exit. This must be the only op‐
231 tion.
232
233 -h <type[,options]>
234 Specifies the hash algorithm. See Hash Algorithm section for a
235 list of supported algorithms.
236
237 -W waiter
238 Specifies the waiter type to use.
239
240 Hash Algorithm
241 The following hash algorithms are available:
242
243 -h critbit
A self-scaling tree structure. The default hash algorithm in Var‐
245 nish Cache 2.1 and onwards. In comparison to a more traditional
246 B tree the critbit tree is almost completely lockless. Do not
247 change this unless you are certain what you're doing.
248
249 -h simple_list
250 A simple doubly-linked list. Not recommended for production
251 use.
252
253 -h <classic[,buckets]>
254 A standard hash table. The hash key is the CRC32 of the object's
255 URL modulo the size of the hash table. Each table entry points
256 to a list of elements which share the same hash key. The buckets
257 parameter specifies the number of entries in the hash table.
258 The default is 16383.
259
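For example, to select the classic hash with a larger table (the bucket
count is illustrative):

    varnishd -h classic,65536 -f /etc/varnish/default.vcl
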
260 Storage Backend
261 The argument format to define storage backends is:
262
-s <[name=]kind[,options]>
264 If name is omitted, Varnish will name storages sN, starting with
265 s0 and incrementing N for every new storage.
266
267 For kind and options see details below.
268
Storages can be used in VCL as storage.name. For example, if myStor‐
270 age was defined by -s myStorage=malloc,5G, it could be used in VCL like
271 so:
272
273 set beresp.storage = storage.myStorage;
274
A special name is Transient, which is the default storage for un‐
cacheable objects resulting from a pass, hit-for-miss or
277 hit-for-pass.
278
279 If no -s options are given, the default is:
280
281 -s default,100m
282
283 If no Transient storage is defined, the default is an unbound default
284 storage as if defined as:
285
286 -s Transient=default
287
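To put a bound on transient storage, Transient can be defined explicitly
like any other storage (the sizes below are illustrative):

    varnishd -s malloc,1G -s Transient=malloc,256M -f /etc/varnish/default.vcl
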
288 The following storage types and options are available:
289
290 -s <default[,size]>
291 The default storage type resolves to umem where available and
292 malloc otherwise.
293
294 -s <malloc[,size]>
295 malloc is a memory based backend.
296
297 -s <umem[,size]>
298 umem is a storage backend which is more efficient than malloc on
299 platforms where it is available.
300
301 See the section on umem in chapter Storage backends of The Var‐
302 nish Users Guide for details.
303
304 -s <file,path[,size[,granularity[,advice]]]>
305 The file backend stores data in a file on disk. The file will be
accessed using mmap. Note that this storage provides no cache
307 persistence.
308
309 The path is mandatory. If path points to a directory, a tempo‐
310 rary file will be created in that directory and immediately un‐
311 linked. If path points to a non-existing file, the file will be
312 created.
313
314 If size is omitted, and path points to an existing file with a
315 size greater than zero, the size of that file will be used. If
316 not, an error is reported.
317
318 Granularity sets the allocation block size. Defaults to the sys‐
319 tem page size or the filesystem block size, whichever is larger.
320
321 Advice tells the kernel how varnishd expects to use this mapped
322 region so that the kernel can choose the appropriate read-ahead
323 and caching techniques. Possible values are normal, random and
sequential, corresponding to the MADV_NORMAL, MADV_RANDOM and
MADV_SEQUENTIAL madvise() advice arguments, respectively. Defaults
to random.
327
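For example (the paths, size, granularity and advice below are
illustrative):

    varnishd -s file,/var/cache/varnish/cache.bin,10G
    varnishd -s disk=file,/var/cache/varnish,50G,8k,random
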
328 -s <persistent,path,size>
329 Persistent storage. Varnish will store objects in a file in a
330 manner that will secure the survival of most of the objects in
331 the event of a planned or unplanned shutdown of Varnish. The
persistent storage backend has multiple issues and will
likely be removed in a future version of Varnish.
334
335 Jail
336 Varnish jails are a generalization over various platform specific meth‐
337 ods to reduce the privileges of varnish processes. They may have spe‐
338 cific options. Available jails are:
339
340 -j <solaris[,worker=`privspec`]>
Reduce privileges(5) for varnishd and its sub-processes to the mini‐
342 mally required set. Only available on platforms which have the
343 setppriv(2) call.
344
345 The optional worker argument can be used to pass a privi‐
346 lege-specification (see ppriv(1)) by which to extend the effec‐
347 tive set of the varnish worker process. While extended privi‐
leges may be required by custom vmods, it is always more secure
not to use the worker option.
350
351 Example to grant basic privileges to the worker process:
352
353 -j solaris,worker=basic
354
355 -j <unix[,user=`user`][,ccgroup=`group`][,workuser=`user`]>
356 Default on all other platforms when varnishd is started with an
357 effective uid of 0 ("as root").
358
359 With the unix jail mechanism activated, varnish will switch to
360 an alternative user for subprocesses and change the effective
361 uid of the master process whenever possible.
362
363 The optional user argument specifies which alternative user to
364 use. It defaults to varnish.
365
366 The optional ccgroup argument specifies a group to add to var‐
367 nish subprocesses requiring access to a c-compiler. There is no
368 default.
369
370 The optional workuser argument specifies an alternative user to
371 use for the worker process. It defaults to vcache.
372
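For example (the group name is illustrative; the users shown are the
documented defaults):

    varnishd -j unix,user=varnish,ccgroup=varnish,workuser=vcache -f /etc/varnish/default.vcl
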
373 -j none
Last resort jail choice: with the jail mechanism none, varnish will
375 run all processes with the privileges it was started with.
376
377 Management Interface
378 If the -T option was specified, varnishd will offer a command-line man‐
379 agement interface on the specified address and port. The recommended
380 way of connecting to the command-line management interface is through
381 varnishadm(1).
382
383 The commands available are documented in varnish-cli(7).
384
385 CLI Command File
386 The -I option makes it possible to run arbitrary management commands
387 when varnishd is launched, before the worker process is started. In
388 particular, this is the way to load configurations, apply labels to
389 them, and make a VCL instance active that uses those labels on startup:
390
391 vcl.load panic /etc/varnish_panic.vcl
392 vcl.load siteA0 /etc/varnish_siteA.vcl
393 vcl.load siteB0 /etc/varnish_siteB.vcl
394 vcl.load siteC0 /etc/varnish_siteC.vcl
395 vcl.label siteA siteA0
396 vcl.label siteB siteB0
397 vcl.label siteC siteC0
398 vcl.load main /etc/varnish_main.vcl
399 vcl.use main
400
401 Every line in the file, including the last line, must be terminated by
402 a newline or carriage return.
403
404 If a command in the file is prefixed with '-', failure will not abort
405 the startup.
406
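For example, the following line (the file name is illustrative) attempts
to load an extra VCL but lets startup continue even if the command fails:

    -vcl.load extra /etc/varnish_extra.vcl
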
407 Note that it is necessary to include an explicit vcl.use command to se‐
408 lect which VCL should be the active VCL when relying on CLI Command
409 File to load the configurations at startup.
410
RUN TIME PARAMETERS
Run Time Parameter Flags
413 Runtime parameters are marked with shorthand flags to avoid repeating
the same text over and over in the table below. The meanings of the
415 flags are:
416
417 • experimental
418
419 We have no solid information about good/bad/optimal values for this
420 parameter. Feedback with experience and observations are most wel‐
421 come.
422
423 • delayed
424
425 This parameter can be changed on the fly, but will not take effect
426 immediately.
427
428 • restart
429
430 The worker process must be stopped and restarted, before this parame‐
431 ter takes effect.
432
433 • reload
434
435 The VCL programs must be reloaded for this parameter to take effect.
436
437 • wizard
438
439 Do not touch unless you really know what you're doing.
440
441 • only_root
442
443 Only works if varnishd is running as root.
444
445 Default Value Exceptions on 32 bit Systems
446 Be aware that on 32 bit systems, certain default or maximum values are
447 reduced relative to the values listed below, in order to conserve VM
448 space:
449
450 • workspace_client: 24k
451
452 • workspace_backend: 20k
453
454 • http_resp_size: 8k
455
456 • http_req_size: 12k
457
458 • gzip_buffer: 4k
459
460 • vsl_buffer: 4k
461
462 • vsl_space: 1G (maximum)
463
464 • thread_pool_stack: 64k
465
466 List of Parameters
467 This text is produced from the same text you will find in the CLI if
468 you use the param.show command:
469
470 accept_filter
471 NB: This parameter depends on a feature which is not available on all
472 platforms.
473
474 • Units: bool
475
476 • Default: on (if your platform supports accept filters)
477
478 Enable kernel accept-filters. This may require a kernel module to be
479 loaded to have an effect when enabled.
480
Enabling accept_filter may prevent some requests from reaching Varnish in
482 the first place. Malformed requests may go unnoticed and not increase
483 the client_req_400 counter. GET or HEAD requests with a body may be
484 blocked altogether.
485
486 acceptor_sleep_decay
487 • Default: 0.9
488
489 • Minimum: 0
490
491 • Maximum: 1
492
493 • Flags: experimental
494
495 If we run out of resources, such as file descriptors or worker threads,
496 the acceptor will sleep between accepts. This parameter (multiplica‐
tively) reduces the sleep duration for each successful accept (i.e. 0.9
= reduce by 10%).
499
500 acceptor_sleep_incr
501 • Units: seconds
502
503 • Default: 0.000
504
505 • Minimum: 0.000
506
507 • Maximum: 1.000
508
509 • Flags: experimental
510
511 If we run out of resources, such as file descriptors or worker threads,
the acceptor will sleep between accepts. This parameter controls how
much longer we sleep each time we fail to accept a new connection.
514
515 acceptor_sleep_max
516 • Units: seconds
517
518 • Default: 0.050
519
520 • Minimum: 0.000
521
522 • Maximum: 10.000
523
524 • Flags: experimental
525
526 If we run out of resources, such as file descriptors or worker threads,
527 the acceptor will sleep between accepts. This parameter limits how
528 long it can sleep between attempts to accept new connections.
529
530 auto_restart
531 • Units: bool
532
533 • Default: on
534
535 Automatically restart the child/worker process if it dies.
536
537 backend_idle_timeout
538 • Units: seconds
539
540 • Default: 60.000
541
542 • Minimum: 1.000
543
544 Timeout before we close unused backend connections.
545
546 backend_local_error_holddown
547 • Units: seconds
548
549 • Default: 10.000
550
551 • Minimum: 0.000
552
553 • Flags: experimental
554
When connecting to backends, certain error codes (EADDRNOTAVAIL,
EACCES, EPERM) signal a local resource shortage or configuration issue
557 for which retrying connection attempts may worsen the situation due to
558 the complexity of the operations involved in the kernel. This parame‐
559 ter prevents repeated connection attempts for the configured duration.
560
561 backend_remote_error_holddown
562 • Units: seconds
563
564 • Default: 0.250
565
566 • Minimum: 0.000
567
568 • Flags: experimental
569
570 When connecting to backends, certain error codes (ECONNREFUSED, ENETUN‐
571 REACH) signal fundamental connection issues such as the backend not ac‐
572 cepting connections or routing problems for which repeated connection
attempts are considered useless. This parameter prevents repeated con‐
574 nection attempts for the configured duration.
575
576 ban_cutoff
577 • Units: bans
578
579 • Default: 0
580
581 • Minimum: 0
582
583 • Flags: experimental
584
585 Expurge long tail content from the cache to keep the number of bans be‐
586 low this value. 0 disables.
587
588 When this parameter is set to a non-zero value, the ban lurker contin‐
589 ues to work the ban list as usual top to bottom, but when it reaches
590 the ban_cutoff-th ban, it treats all objects as if they matched a ban
591 and expurges them from cache. As actively used objects get tested
592 against the ban list at request time and thus are likely to be associ‐
593 ated with bans near the top of the ban list, with ban_cutoff, least re‐
594 cently accessed objects (the "long tail") are removed.
595
596 This parameter is a safety net to avoid bad response times due to bans
597 being tested at lookup time. Setting a cutoff trades response time for
598 cache efficiency. The recommended value is proportional to
599 rate(bans_lurker_tests_tested) / n_objects while the ban lurker is
600 working, which is the number of bans the system can sustain. The addi‐
601 tional latency due to request ban testing is in the order of ban_cutoff
602 / rate(bans_lurker_tests_tested). For example, for
603 rate(bans_lurker_tests_tested) = 2M/s and a tolerable latency of 100ms,
604 a good value for ban_cutoff may be 200K.
605
606 ban_dups
607 • Units: bool
608
609 • Default: on
610
611 Eliminate older identical bans when a new ban is added. This saves CPU
612 cycles by not comparing objects to identical bans. This is a waste of
613 time if you have many bans which are never identical.
614
615 ban_lurker_age
616 • Units: seconds
617
618 • Default: 60.000
619
620 • Minimum: 0.000
621
622 The ban lurker will ignore bans until they are this old. When a ban is
623 added, the active traffic will be tested against it as part of object
624 lookup. Because many applications issue bans in bursts, this parameter
625 holds the ban-lurker off until the rush is over. This should be set to
626 the approximate time which a ban-burst takes.
627
628 ban_lurker_batch
629 • Default: 1000
630
631 • Minimum: 1
632
633 The ban lurker sleeps ${ban_lurker_sleep} after examining this many ob‐
634 jects. Use this to pace the ban-lurker if it eats too many resources.
635
636 ban_lurker_holdoff
637 • Units: seconds
638
639 • Default: 0.010
640
641 • Minimum: 0.000
642
643 • Flags: experimental
644
645 How long the ban lurker sleeps when giving way to lookup due to lock
646 contention.
647
648 ban_lurker_sleep
649 • Units: seconds
650
651 • Default: 0.010
652
653 • Minimum: 0.000
654
655 How long the ban lurker sleeps after examining ${ban_lurker_batch} ob‐
656 jects. Use this to pace the ban-lurker if it eats too many resources.
657 A value of zero will disable the ban lurker entirely.
658
659 between_bytes_timeout
660 • Units: seconds
661
662 • Default: 60.000
663
664 • Minimum: 0.000
665
666 We only wait for this many seconds between bytes received from the
667 backend before giving up the fetch. VCL values, per backend or per
668 backend request take precedence. This parameter does not apply to
669 pipe'ed requests.
670
671 cc_command
672 NB: The actual default value for this parameter depends on the Varnish
673 build environment and options.
674
675 • Default: exec $CC $CFLAGS %w -shared -o %o %s
676
677 • Flags: must_reload
678
679 The command used for compiling the C source code to a dlopen(3) load‐
680 able object. The following expansions can be used:
681
682 • %s: the source file name
683
684 • %o: the output file name
685
686 • %w: the cc_warnings parameter
687
688 • %d: the raw default cc_command
689
690 • %D: the expanded default cc_command
691
692 • %n: the working directory (-n option)
693
694 • %%: a percent sign
695
696 Unknown percent expansion sequences are ignored, and to avoid future
697 incompatibilities percent characters should be escaped with a double
698 percent sequence.
699
700 The %d and %D expansions allow passing the parameter's default value to
701 a wrapper script to perform additional processing.
702
703 cc_warnings
704 NB: The actual default value for this parameter depends on the Varnish
705 build environment and options.
706
707 • Default: -Wall -Werror
708
709 • Flags: must_reload
710
711 Warnings used when compiling the C source code with the cc_command pa‐
712 rameter. By default, VCL is compiled with the same set of warnings as
713 Varnish itself.
714
715 cli_limit
716 • Units: bytes
717
718 • Default: 48k
719
720 • Minimum: 128b
721
722 • Maximum: 99999999b
723
724 Maximum size of CLI response. If the response exceeds this limit, the
725 response code will be 201 instead of 200 and the last line will indi‐
726 cate the truncation.
727
728 cli_timeout
729 • Units: seconds
730
731 • Default: 60.000
732
733 • Minimum: 0.000
734
Timeout for the child's replies to CLI requests from the management process.
736
737 clock_skew
738 • Units: seconds
739
740 • Default: 10
741
742 • Minimum: 0
743
How much clock skew we are willing to accept between the backend and our
745 own clock.
746
747 clock_step
748 • Units: seconds
749
750 • Default: 1.000
751
752 • Minimum: 0.000
753
754 How much observed clock step we are willing to accept before we panic.
755
756 connect_timeout
757 • Units: seconds
758
759 • Default: 3.500
760
761 • Minimum: 0.000
762
763 Default connection timeout for backend connections. We only try to con‐
764 nect to the backend for this many seconds before giving up. VCL can
765 override this default value for each backend and backend request.
766
767 critbit_cooloff
768 • Units: seconds
769
770 • Default: 180.000
771
772 • Minimum: 60.000
773
774 • Maximum: 254.000
775
776 • Flags: wizard
777
778 How long the critbit hasher keeps deleted objheads on the cooloff list.
779
780 debug
781 • Default: none
782
783 Enable/Disable various kinds of debugging.
784
785 none Disable all debugging
786
787 Use +/- prefix to set/reset individual bits:
788
789 req_state
790 VSL Request state engine
791
792 workspace
793 VSL Workspace operations
794
795 waitinglist
796 VSL Waitinglist events
797
798 syncvsl
799 Make VSL synchronous
800
801 hashedge
802 Edge cases in Hash
803
804 vclrel Rapid VCL release
805
806 lurker VSL Ban lurker
807
808 esi_chop
809 Chop ESI fetch to bits
810
811 flush_head
812 Flush after http1 head
813
814 vtc_mode
815 Varnishtest Mode
816
817 witness
818 Emit WITNESS lock records
819
820 vsm_keep
821 Keep the VSM file on restart
822
823 slow_acceptor
824 Slow down Acceptor
825
826 h2_nocheck
827 Disable various H2 checks
828
829 vmod_so_keep
830 Keep copied VMOD libraries
831
832 processors
833 Fetch/Deliver processors
834
835 protocol
836 Protocol debugging
837
838 vcl_keep
839 Keep VCL C and so files
840
841 lck Additional lock statistics
842
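For example, debug bits can be set at startup with -p or changed later
through the CLI (the selected bits are illustrative):

    varnishd -f /etc/varnish/default.vcl -p debug=+syncvsl,+vsm_keep
    param.set debug +witness          (from varnish-cli(7))
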
843 default_grace
844 • Units: seconds
845
846 • Default: 10s
847
848 • Minimum: 0.000
849
850 • Flags: obj_sticky
851
852 Default grace period. We will deliver an object this long after it has
853 expired, provided another thread is attempting to get a new copy.
854
855 default_keep
856 • Units: seconds
857
858 • Default: 0s
859
860 • Minimum: 0.000
861
862 • Flags: obj_sticky
863
864 Default keep period. We will keep a useless object around this long,
865 making it available for conditional backend fetches. That means that
866 the object will be removed from the cache at the end of ttl+grace+keep.
867
868 default_ttl
869 • Units: seconds
870
871 • Default: 2m
872
873 • Minimum: 0.000
874
875 • Flags: obj_sticky
876
877 The TTL assigned to objects if neither the backend nor the VCL code as‐
878 signs one.
879
880 experimental
881 • Default: none
882
883 Enable/Disable experimental features.
884
885 none Disable all experimental features
886
887 Use +/- prefix to set/reset individual bits:
888
889 drop_pools
890 Drop thread pools
891
892 feature
893 • Default: +validate_headers
894
895 Enable/Disable various minor features.
896
897 default
898 Set default value
899
900 none Disable all features.
901
902 Use +/- prefix to enable/disable individual feature:
903
904 http2 Enable HTTP/2 protocol support.
905
906 short_panic
907 Short panic message.
908
909 no_coredump
910 No coredumps. Must be set before child process starts.
911
912 https_scheme
913 Extract host from full URI in the HTTP/1 request line, if the
914 scheme is https.
915
916 http_date_postel
917 Tolerate non compliant timestamp headers like Date, Last-Mod‐
918 ified, Expires etc.
919
920 esi_ignore_https
Convert <esi:include src="https://... to http://...
922
923 esi_disable_xml_check
924 Allow ESI processing on non-XML ESI bodies
925
926 esi_ignore_other_elements
927 Ignore XML syntax errors in ESI bodies.
928
929 esi_remove_bom
930 Ignore UTF-8 BOM in ESI bodies.
931
932 esi_include_onerror
933 Parse the onerror attribute of <esi:include> tags.
934
935 wait_silo
936 Wait for persistent silos to completely load before serving
937 requests.
938
939 validate_headers
940 Validate all header set operations to conform to RFC7230.
941
942 busy_stats_rate
943 Make busy workers comply with thread_stats_rate.
944
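For example, to enable HTTP/2 support and disable core dumps at startup
(an illustrative selection):

    varnishd -f /etc/varnish/default.vcl -p feature=+http2,+no_coredump
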
945 fetch_chunksize
946 • Units: bytes
947
948 • Default: 16k
949
950 • Minimum: 4k
951
952 • Flags: experimental
953
The default chunksize used by the fetcher. This should be bigger than
the majority of objects with short TTLs. Internal limits in the
storage_file module make increases above 128kb a dubious idea.
957
958 fetch_maxchunksize
959 • Units: bytes
960
961 • Default: 0.25G
962
963 • Minimum: 64k
964
965 • Flags: experimental
966
967 The maximum chunksize we attempt to allocate from storage. Making this
968 too large may cause delays and storage fragmentation.
969
970 first_byte_timeout
971 • Units: seconds
972
973 • Default: 60.000
974
975 • Minimum: 0.000
976
977 Default timeout for receiving first byte from backend. We only wait for
978 this many seconds for the first byte before giving up. VCL can over‐
979 ride this default value for each backend and backend request. This pa‐
980 rameter does not apply to pipe'ed requests.
981
982 gzip_buffer
983 • Units: bytes
984
985 • Default: 32k
986
987 • Minimum: 2k
988
989 • Flags: experimental
990
991 Size of malloc buffer used for gzip processing. These buffers are used
992 for in-transit data, for instance gunzip'ed data being sent to a
client. Making this space too small results in more overhead (writes to
sockets etc.); making it too big is probably just a waste of memory.
995
996 gzip_level
997 • Default: 6
998
999 • Minimum: 0
1000
1001 • Maximum: 9
1002
1003 Gzip compression level: 0=debug, 1=fast, 9=best
1004
1005 gzip_memlevel
1006 • Default: 8
1007
1008 • Minimum: 1
1009
1010 • Maximum: 9
1011
1012 Gzip memory level 1=slow/least, 9=fast/most compression. Memory impact
1013 is 1=1k, 2=2k, ... 9=256k.
1014
1015 h2_header_table_size
1016 • Units: bytes
1017
1018 • Default: 4k
1019
1020 • Minimum: 0b
1021
1022 HTTP2 header table size. This is the size that will be used for the
1023 HPACK dynamic decoding table.
1024
1025 h2_initial_window_size
1026 • Units: bytes
1027
1028 • Default: 65535b
1029
1030 • Minimum: 65535b
1031
1032 • Maximum: 2147483647b
1033
1034 HTTP2 initial flow control window size.
1035
1036 h2_max_concurrent_streams
1037 • Units: streams
1038
1039 • Default: 100
1040
1041 • Minimum: 0
1042
1043 HTTP2 Maximum number of concurrent streams. This is the number of re‐
1044 quests that can be active at the same time for a single HTTP2 connec‐
1045 tion.
1046
1047 h2_max_frame_size
1048 • Units: bytes
1049
1050 • Default: 16k
1051
1052 • Minimum: 16k
1053
1054 • Maximum: 16777215b
1055
1056 HTTP2 maximum per frame payload size we are willing to accept.
1057
1058 h2_max_header_list_size
1059 • Units: bytes
1060
1061 • Default: 2147483647b
1062
1063 • Minimum: 0b
1064
1065 HTTP2 maximum size of an uncompressed header list.
1066
1067 h2_rx_window_increment
1068 • Units: bytes
1069
1070 • Default: 1M
1071
1072 • Minimum: 1M
1073
1074 • Maximum: 1G
1075
1076 • Flags: wizard
1077
HTTP2 Receive Window Increments. How big the credits we send in
WINDOW_UPDATE frames are. Only affects incoming request bodies (ie: POST,
PUT etc.)
1080
1081 h2_rx_window_low_water
1082 • Units: bytes
1083
1084 • Default: 10M
1085
1086 • Minimum: 65535b
1087
1088 • Maximum: 1G
1089
1090 • Flags: wizard
1091
1092 HTTP2 Receive Window low water mark. We try to keep the window at
least this big. Only affects incoming request bodies (ie: POST, PUT
1094 etc.)
1095
1096 h2_rxbuf_storage
1097 • Default: Transient
1098
1099 • Flags: must_restart
1100
1101 The name of the storage backend that HTTP/2 receive buffers should be
1102 allocated from.
1103
1104 http1_iovs
1105 • Units: struct iovec (=16 bytes)
1106
1107 • Default: 64
1108
1109 • Minimum: 5
1110
1111 • Maximum: 1024
1112
1113 • Flags: wizard
1114
Number of io vectors to allocate for HTTP1 protocol transmission. An
HTTP1 header needs 7 io vectors plus 2 per HTTP header field. Allocated from
1117 workspace_thread.
1118
1119 http_gzip_support
1120 • Units: bool
1121
1122 • Default: on
1123
Enable gzip support. When enabled, Varnish requests compressed objects
from the backend and stores them compressed. If a client does not sup‐
1126 port gzip encoding Varnish will uncompress compressed objects on de‐
1127 mand. Varnish will also rewrite the Accept-Encoding header of clients
1128 indicating support for gzip to:
1129 Accept-Encoding: gzip
1130
1131 Clients that do not support gzip will have their Accept-Encoding header
1132 removed. For more information on how gzip is implemented please see the
1133 chapter on gzip in the Varnish reference.
1134
1135 When gzip support is disabled the variables beresp.do_gzip and
1136 beresp.do_gunzip have no effect in VCL.
1137
1138 http_max_hdr
1139 • Units: header lines
1140
1141 • Default: 64
1142
1143 • Minimum: 32
1144
1145 • Maximum: 65535
1146
1147 Maximum number of HTTP header lines we allow in
1148 {req|resp|bereq|beresp}.http (obj.http is autosized to the exact number
1149 of headers). Cheap, ~20 bytes, in terms of workspace memory. Note
1150 that the first line occupies five header lines.
1151
1152 http_range_support
1153 • Units: bool
1154
1155 • Default: on
1156
1157 Enable support for HTTP Range headers.
1158
1159 http_req_hdr_len
1160 • Units: bytes
1161
1162 • Default: 8k
1163
1164 • Minimum: 40b
1165
1166 Maximum length of any HTTP client request header we will allow. The
limit is inclusive of its continuation lines.
1168
1169 http_req_size
1170 • Units: bytes
1171
1172 • Default: 32k
1173
1174 • Minimum: 0.25k
1175
1176 Maximum number of bytes of HTTP client request we will deal with. This
1177 is a limit on all bytes up to the double blank line which ends the HTTP
1178 request. The memory for the request is allocated from the client
1179 workspace (param: workspace_client) and this parameter limits how much
1180 of that the request is allowed to take up.
1181
1182 http_resp_hdr_len
1183 • Units: bytes
1184
1185 • Default: 8k
1186
1187 • Minimum: 40b
1188
1189 Maximum length of any HTTP backend response header we will allow. The
limit is inclusive of its continuation lines.
1191
1192 http_resp_size
1193 • Units: bytes
1194
1195 • Default: 32k
1196
1197 • Minimum: 0.25k
1198
1199 Maximum number of bytes of HTTP backend response we will deal with.
1200 This is a limit on all bytes up to the double blank line which ends the
1201 HTTP response. The memory for the response is allocated from the back‐
1202 end workspace (param: workspace_backend) and this parameter limits how
1203 much of that the response is allowed to take up.
1204
1205 idle_send_timeout
1206 • Units: seconds
1207
1208 • Default: 60.000
1209
1210 • Minimum: 0.000
1211
1212 • Flags: delayed
1213
1214 Send timeout for individual pieces of data on client connections. May
1215 get extended if 'send_timeout' applies.
1216
1217 When this timeout is hit, the session is closed.
1218
1219 See the man page for setsockopt(2) or socket(7) under SO_SNDTIMEO for
1220 more information.
1221
1222 listen_depth
1223 • Units: connections
1224
1225 • Default: 1024
1226
1227 • Minimum: 0
1228
1229 • Flags: must_restart
1230
1231 Listen queue depth.
1232
1233 lru_interval
1234 • Units: seconds
1235
1236 • Default: 2.000
1237
1238 • Minimum: 0.000
1239
1240 • Flags: experimental
1241
1242 Grace period before object moves on LRU list. Objects are only moved
1243 to the front of the LRU list if they have not been moved there already
1244 inside this timeout period. This reduces the amount of lock operations
1245 necessary for LRU list access.
1246
1247 max_esi_depth
1248 • Units: levels
1249
1250 • Default: 5
1251
1252 • Minimum: 0
1253
1254 Maximum depth of esi:include processing.
1255
1256 max_restarts
1257 • Units: restarts
1258
1259 • Default: 4
1260
1261 • Minimum: 0
1262
1263 Upper limit on how many times a request can restart.
1264
1265 max_retries
1266 • Units: retries
1267
1268 • Default: 4
1269
1270 • Minimum: 0
1271
1272 Upper limit on how many times a backend fetch can retry.
1273
1274 max_vcl
1275 • Default: 100
1276
1277 • Minimum: 0
1278
1279 Threshold of loaded VCL programs. (VCL labels are not counted.) Pa‐
1280 rameter max_vcl_handling determines behaviour.
1281
1282 max_vcl_handling
1283 • Default: 1
1284
1285 • Minimum: 0
1286
1287 • Maximum: 2
1288
1289 Behaviour when attempting to exceed max_vcl loaded VCL.
1290
1291 • 0 - Ignore max_vcl parameter.
1292
1293 • 1 - Issue warning.
1294
1295 • 2 - Refuse loading VCLs.
1296
1297 nuke_limit
1298 • Units: allocations
1299
1300 • Default: 50
1301
1302 • Minimum: 0
1303
1304 • Flags: experimental
1305
1306 Maximum number of objects we attempt to nuke in order to make space for
an object body.
1308
1309 pcre2_depth_limit
1310 • Default: 20
1311
1312 • Minimum: 1
1313
The recursion depth-limit for the internal match logic in
1315 pcre2_match().
1316
1317 (See: pcre2_set_depth_limit() in pcre2 docs.)
1318
1319 This puts an upper limit on the amount of stack used by PCRE2 for cer‐
1320 tain classes of regular expressions.
1321
1322 We have set the default value low in order to prevent crashes, at the
1323 cost of possible regexp matching failures.
1324
1325 Matching failures will show up in the log as VCL_Error messages.
1326
1327 pcre2_jit_compilation
1328 • Units: bool
1329
1330 • Default: on
1331
1332 Use the pcre2 JIT compiler if available.
1333
1334 pcre2_match_limit
1335 • Default: 10000
1336
1337 • Minimum: 1
1338
1339 The limit for the number of calls to the internal match logic in
1340 pcre2_match().
1341
1342 (See: pcre2_set_match_limit() in pcre2 docs.)
1343
1344 This parameter limits how much CPU time regular expression matching can
1345 soak up.
1346
1347 ping_interval
1348 • Units: seconds
1349
1350 • Default: 3
1351
1352 • Minimum: 0
1353
1354 • Flags: must_restart
1355
1356 Interval between pings from parent to child. Zero will disable pinging
1357 entirely, which makes it possible to attach a debugger to the child.
1358
1359 pipe_sess_max
1360 • Units: connections
1361
1362 • Default: 0
1363
1364 • Minimum: 0
1365
1366 Maximum number of sessions dedicated to pipe transactions.
1367
1368 pipe_timeout
1369 • Units: seconds
1370
1371 • Default: 60.000
1372
1373 • Minimum: 0.000
1374
Idle timeout for PIPE sessions. If nothing has been received in either
1376 direction for this many seconds, the session is closed.
1377
1378 pool_req
1379 • Default: 10,100,10
1380
1381 Parameters for per worker pool request memory pool.
1382
1383 The three numbers are:
1384
1385 min_pool
1386 minimum size of free pool.
1387
1388 max_pool
1389 maximum size of free pool.
1390
1391 max_age
1392 max age of free element.
1393
1394 pool_sess
1395 • Default: 10,100,10
1396
1397 Parameters for per worker pool session memory pool.
1398
1399 The three numbers are:
1400
1401 min_pool
1402 minimum size of free pool.
1403
1404 max_pool
1405 maximum size of free pool.
1406
1407 max_age
1408 max age of free element.
1409
1410 pool_vbo
1411 • Default: 10,100,10
1412
1413 Parameters for backend object fetch memory pool.
1414
1415 The three numbers are:
1416
1417 min_pool
1418 minimum size of free pool.
1419
1420 max_pool
1421 maximum size of free pool.
1422
1423 max_age
1424 max age of free element.
1425
1426 prefer_ipv6
1427 • Units: bool
1428
1429 • Default: off
1430
1431 Prefer IPv6 address when connecting to backends which have both IPv4
1432 and IPv6 addresses.
1433
1434 rush_exponent
1435 • Units: requests per request
1436
1437 • Default: 3
1438
1439 • Minimum: 2
1440
1441 • Flags: experimental
1442
How many parked requests we start for each completed request on the
object. NB: Even with the implicit delay of delivery, this parameter
controls an exponential increase in the number of worker threads.
1446
1447 send_timeout
1448 • Units: seconds
1449
1450 • Default: 600.000
1451
1452 • Minimum: 0.000
1453
1454 • Flags: delayed
1455
1456 Total timeout for ordinary HTTP1 responses. Does not apply to some in‐
1457 ternally generated errors and pipe mode.
1458
1459 When 'idle_send_timeout' is hit while sending an HTTP1 response, the
1460 timeout is extended unless the total time already taken for sending the
1461 response in its entirety exceeds this many seconds.
1462
When this timeout is hit, the session is closed.
1464
1465 shortlived
1466 • Units: seconds
1467
1468 • Default: 10.000
1469
1470 • Minimum: 0.000
1471
1472 Objects created with (ttl+grace+keep) shorter than this are always put
1473 in transient storage.
1474
1475 sigsegv_handler
1476 • Units: bool
1477
1478 • Default: on
1479
1480 • Flags: must_restart
1481
1482 Install a signal handler which tries to dump debug information on seg‐
1483 mentation faults, bus errors and abort signals.
1484
1485 syslog_cli_traffic
1486 • Units: bool
1487
1488 • Default: on
1489
1490 Log all CLI traffic to syslog(LOG_INFO).
1491
1492 tcp_fastopen
1493 NB: This parameter depends on a feature which is not available on all
1494 platforms.
1495
1496 • Units: bool
1497
1498 • Default: off
1499
1500 • Flags: must_restart
1501
1502 Enable TCP Fast Open extension.
1503
1504 tcp_keepalive_intvl
1505 NB: This parameter depends on a feature which is not available on all
1506 platforms.
1507
1508 • Units: seconds
1509
1510 • Default: platform dependent
1511
1512 • Minimum: 1.000
1513
1514 • Maximum: 100.000
1515
1516 • Flags: experimental
1517
1518 The number of seconds between TCP keep-alive probes. Ignored for Unix
1519 domain sockets.
1520
1521 tcp_keepalive_probes
1522 NB: This parameter depends on a feature which is not available on all
1523 platforms.
1524
1525 • Units: probes
1526
1527 • Default: platform dependent
1528
1529 • Minimum: 1
1530
1531 • Maximum: 100
1532
1533 • Flags: experimental
1534
1535 The maximum number of TCP keep-alive probes to send before giving up
1536 and killing the connection if no response is obtained from the other
1537 end. Ignored for Unix domain sockets.
1538
1539 tcp_keepalive_time
1540 NB: This parameter depends on a feature which is not available on all
1541 platforms.
1542
1543 • Units: seconds
1544
1545 • Default: platform dependent
1546
1547 • Minimum: 1.000
1548
1549 • Maximum: 7200.000
1550
1551 • Flags: experimental
1552
1553 The number of seconds a connection needs to be idle before TCP begins
1554 sending out keep-alive probes. Ignored for Unix domain sockets.
1555
1556 thread_pool_add_delay
1557 • Units: seconds
1558
1559 • Default: 0.000
1560
1561 • Minimum: 0.000
1562
1563 • Flags: experimental
1564
1565 Wait at least this long after creating a thread.
1566
1567 Some (buggy) systems may need a short (sub-second) delay between creat‐
1568 ing threads. Set this to a few milliseconds if you see the
1569 'threads_failed' counter grow too much.
1570
1571 Setting this too high results in insufficient worker threads.
1572
1573 thread_pool_destroy_delay
1574 • Units: seconds
1575
1576 • Default: 1.000
1577
1578 • Minimum: 0.010
1579
1580 • Flags: delayed, experimental
1581
1582 Wait this long after destroying a thread.
1583
1584 This controls the decay of thread pools when idle(-ish).
1585
1586 thread_pool_fail_delay
1587 • Units: seconds
1588
1589 • Default: 0.200
1590
1591 • Minimum: 0.010
1592
1593 • Flags: experimental
1594
1595 Wait at least this long after a failed thread creation before trying to
1596 create another thread.
1597
1598 Failure to create a worker thread is often a sign that the end is
1599 near, because the process is running out of some resource. This delay
tries not to rush the end needlessly.
1601
1602 If thread creation failures are a problem, check that thread_pool_max
1603 is not too high.
1604
1605 It may also help to increase thread_pool_timeout and thread_pool_min,
to reduce the rate at which threads are destroyed and later recreated.
1607
1608 thread_pool_max
1609 • Units: threads
1610
1611 • Default: 5000
1612
1613 • Minimum: thread_pool_min
1614
1615 • Flags: delayed
1616
1617 The maximum number of worker threads in each pool.
1618
1619 Do not set this higher than you have to, since excess worker threads
1620 soak up RAM and CPU and generally just get in the way of getting work
1621 done.
1622
1623 thread_pool_min
1624 • Units: threads
1625
1626 • Default: 100
1627
1628 • Minimum: 5
1629
1630 • Maximum: thread_pool_max
1631
1632 • Flags: delayed
1633
1634 The minimum number of worker threads in each pool.
1635
1636 Increasing this may help ramp up faster from low load situations or
1637 when threads have expired.
1638
The technical minimum is 5 threads, but this parameter is strongly
recommended to be at least 10.
1641
1642 thread_pool_reserve
1643 • Units: threads
1644
1645 • Default: 0 (auto-tune: 5% of thread_pool_min)
1646
1647 • Maximum: 95% of thread_pool_min
1648
1649 • Flags: delayed
1650
1651 The number of worker threads reserved for vital tasks in each pool.
1652
1653 Tasks may require other tasks to complete (for example, client requests
1654 may require backend requests, http2 sessions require streams, which re‐
1655 quire requests). This reserve is to ensure that lower priority tasks do
1656 not prevent higher priority tasks from running even under high load.
1657
1658 The effective value is at least 5 (the number of internal priority
1659 classes), irrespective of this parameter.
1660
1661 thread_pool_stack
1662 • Units: bytes
1663
1664 • Default: 80k
1665
1666 • Minimum: sysconf(_SC_THREAD_STACK_MIN)
1667
1668 • Flags: delayed
1669
1670 Worker thread stack size. This will likely be rounded up to a multiple
1671 of 4k (or whatever the page_size might be) by the kernel.
1672
1673 The required stack size is primarily driven by the depth of the
1674 call-tree. The most common relevant determining factors in varnish core
1675 code are GZIP (un)compression, ESI processing and regular expression
1676 matches. VMODs may also require significant amounts of additional
1677 stack. The nesting depth of VCL subs is another factor, although typi‐
1678 cally not predominant.
1679
1680 The stack size is per thread, so the maximum total memory required for
1681 worker thread stacks is in the order of size = thread_pools x
1682 thread_pool_max x thread_pool_stack.
1683
1684 Thus, in particular for setups with many threads, keeping the stack
1685 size at a minimum helps reduce the amount of memory required by Var‐
1686 nish.
1687
1688 On the other hand, thread_pool_stack must be large enough under all
1689 circumstances, otherwise varnish will crash due to a stack overflow.
1690 Usually, a stack overflow manifests itself as a segmentation fault (aka
1691 segfault / SIGSEGV) with the faulting address being near the stack
1692 pointer (sp).
1693
1694 Unless stack usage can be reduced, thread_pool_stack must be increased
1695 when a stack overflow occurs. Setting it in 150%-200% increments is
1696 recommended until stack overflows cease to occur.
1697
1698 thread_pool_timeout
1699 • Units: seconds
1700
1701 • Default: 300.000
1702
1703 • Minimum: 10.000
1704
1705 • Flags: delayed, experimental
1706
1707 Thread idle threshold.
1708
1709 Threads in excess of thread_pool_min, which have been idle for at least
1710 this long, will be destroyed.
1711
1712 thread_pool_watchdog
1713 • Units: seconds
1714
1715 • Default: 60.000
1716
1717 • Minimum: 0.100
1718
1719 • Flags: experimental
1720
1721 Thread queue stuck watchdog.
1722
If no queued work has been released for this long, the worker process
1724 panics itself.
1725
1726 thread_pools
1727 • Units: pools
1728
1729 • Default: 2
1730
1731 • Minimum: 1
1732
1733 • Maximum: 32
1734
1735 • Flags: delayed, experimental
1736
1737 Number of worker thread pools.
1738
1739 Increasing the number of worker pools decreases lock contention. Each
1740 worker pool also has a thread accepting new connections, so for very
1741 high rates of incoming new connections on systems with many cores, in‐
1742 creasing the worker pools may be required.
1743
1744 Too many pools waste CPU and RAM resources, and more than one pool for
1745 each CPU is most likely detrimental to performance.
1746
1747 Can be increased on the fly, but decreases require a restart to take
1748 effect, unless the drop_pools experimental debug flag is set.
1749
1750 thread_queue_limit
1751 • Units: requests
1752
1753 • Default: 20
1754
1755 • Minimum: 0
1756
1757 • Flags: experimental
1758
1759 Permitted request queue length per thread-pool.
1760
1761 This sets the number of requests we will queue, waiting for an avail‐
1762 able thread. Above this limit sessions will be dropped instead of
1763 queued.
1764
1765 thread_stats_rate
1766 • Units: requests
1767
1768 • Default: 10
1769
1770 • Minimum: 0
1771
1772 • Flags: experimental
1773
1774 Worker threads accumulate statistics, and dump these into the global
1775 stats counters if the lock is free when they finish a job (re‐
quest/fetch etc.) This parameter defines the maximum number of jobs a
1777 worker thread may handle, before it is forced to dump its accumulated
1778 stats into the global counters.
1779
1780 timeout_idle
1781 • Units: seconds
1782
1783 • Default: 5.000
1784
1785 • Minimum: 0.000
1786
1787 Idle timeout for client connections.
1788
1789 A connection is considered idle until we have received the full request
1790 headers.
1791
1792 This parameter is particularly relevant for HTTP1 keepalive connec‐
1793 tions which are closed unless the next request is received before this
1794 timeout is reached.
1795
1796 timeout_linger
1797 • Units: seconds
1798
1799 • Default: 0.050
1800
1801 • Minimum: 0.000
1802
1803 • Flags: experimental
1804
1805 How long the worker thread lingers on an idle session before handing it
1806 over to the waiter. When sessions are reused, as much as half of all
1807 reuses happen within the first 100 msec of the previous request com‐
1808 pleting. Setting this too high results in worker threads not doing
anything for their keep; setting it too low just means that more ses‐
1810 sions take a detour around the waiter.
1811
1812 vary_notice
1813 • Units: variants
1814
1815 • Default: 10
1816
1817 • Minimum: 1
1818
1819 How many variants need to be evaluated to log a Notice that there might
1820 be too many variants.
1821
1822 vcc_allow_inline_c
1823 Deprecated alias for the vcc_feature parameter.
1824
1825 vcc_err_unref
1826 Deprecated alias for the vcc_feature parameter.
1827
1828 vcc_feature
1829 • Default: +err_unref,+unsafe_path
1830
1831 Enable/Disable various VCC behaviors.
1832
1833 default
1834 Set default value
1835
1836 none Disable all behaviors.
1837
1838 Use +/- prefix to enable/disable individual behavior:
1839
1840 err_unref
1841 Unreferenced VCL objects result in error.
1842
1843 allow_inline_c
1844 Allow inline C code in VCL.
1845
1846 unsafe_path
1847 Allow '/' in vmod & include paths. Allow 'import ... from
1848 ...'.
1849
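For example, to stop treating unreferenced VCL objects as errors while
keeping the other defaults (an illustrative setting):

    varnishd -f /etc/varnish/default.vcl -p vcc_feature=-err_unref
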
1850 vcc_unsafe_path
1851 Deprecated alias for the vcc_feature parameter.
1852
1853 vcl_cooldown
1854 • Units: seconds
1855
1856 • Default: 600.000
1857
1858 • Minimum: 1.000
1859
1860 How long a VCL is kept warm after being replaced as the active VCL
1861 (granularity approximately 30 seconds).
1862
1863 vcl_path
1864 NB: The actual default value for this parameter depends on the Varnish
1865 build environment and options.
1866
1867 • Default: ${sysconfdir}/varnish:${datadir}/varnish/vcl
1868
1869 Directory (or colon separated list of directories) from which relative
1870 VCL filenames (vcl.load and include) are to be found. By default Var‐
1871 nish searches VCL files in both the system configuration and shared
1872 data directories to allow packages to drop their VCL files in a stan‐
1873 dard location where relative includes would work.
1874
1875 vmod_path
1876 NB: The actual default value for this parameter depends on the Varnish
1877 build environment and options.
1878
1879 • Default: ${libdir}/varnish/vmods
1880
1881 Directory (or colon separated list of directories) where VMODs are to
1882 be found.
1883
1884 vsl_buffer
1885 • Units: bytes
1886
1887 • Default: 16k
1888
1889 • Minimum: vsl_reclen + 12 bytes
1890
1891 Bytes of (req-/backend-)workspace dedicated to buffering VSL records.
1892 When this parameter is adjusted, most likely workspace_client and
1893 workspace_backend will have to be adjusted by the same amount.
1894
1895 Setting this too high costs memory, setting it too low will cause more
1896 VSL flushes and likely increase lock-contention on the VSL mutex.
1897
1898 vsl_mask
• Default: -Debug,-ObjProtocol,-ObjStatus,-ObjReason,-ObjHeader,
  -VCL_trace,-ExpKill,-WorkThread,-Hash,-VfpAcct,-H2RxHdr,-H2RxBody,
  -H2TxHdr,-H2TxBody,-VdpAcct
1902
1903 Mask individual VSL messages from being logged.
1904
1905 default
1906 Set default value
1907
1908 Use +/- prefix in front of VSL tag name to unmask/mask individual VSL
1909 messages.
1910
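For example, to additionally log Debug records while masking VCL_call
records (an illustrative selection):

    varnishd -f /etc/varnish/default.vcl -p vsl_mask=+Debug,-VCL_call
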
1911 vsl_reclen
1912 • Units: bytes
1913
1914 • Default: 255b
1915
1916 • Minimum: 16b
1917
1918 • Maximum: vsl_buffer - 12 bytes
1919
1920 Maximum number of bytes in SHM log record.
1921
1922 vsl_space
1923 • Units: bytes
1924
1925 • Default: 80M
1926
1927 • Minimum: 1M
1928
1929 • Maximum: 4G
1930
1931 • Flags: must_restart
1932
1933 The amount of space to allocate for the VSL fifo buffer in the VSM mem‐
1934 ory segment. If you make this too small, varnish{ncsa|log} etc will
1935 not be able to keep up. Making it too large just costs memory re‐
1936 sources.
1937
1938 vsm_free_cooldown
1939 • Units: seconds
1940
1941 • Default: 60.000
1942
1943 • Minimum: 10.000
1944
1945 • Maximum: 600.000
1946
1947 How long VSM memory is kept warm after a deallocation (granularity ap‐
1948 proximately 2 seconds).
1949
1950 workspace_backend
1951 • Units: bytes
1952
1953 • Default: 96k
1954
1955 • Minimum: 1k
1956
1957 • Flags: delayed
1958
1959 Bytes of HTTP protocol workspace for backend HTTP req/resp. If larger
1960 than 4k, use a multiple of 4k for VM efficiency.
1961
1962 workspace_client
1963 • Units: bytes
1964
1965 • Default: 96k
1966
1967 • Minimum: 9k
1968
1969 • Flags: delayed
1970
Bytes of HTTP protocol workspace for client HTTP req/resp. Use a mul‐
1972 tiple of 4k for VM efficiency. For HTTP/2 compliance this must be at
1973 least 20k, in order to receive fullsize (=16k) frames from the client.
1974 That usually happens only in POST/PUT bodies. For other traffic-pat‐
1975 terns smaller values work just fine.
1976
1977 workspace_session
1978 • Units: bytes
1979
1980 • Default: 0.75k
1981
1982 • Minimum: 384b
1983
1984 • Flags: delayed
1985
1986 Allocation size for session structure and workspace. The workspace
1987 is primarily used for TCP connection addresses. If larger than 4k, use
1988 a multiple of 4k for VM efficiency.
1989
1990 workspace_thread
1991 • Units: bytes
1992
1993 • Default: 2k
1994
1995 • Minimum: 0.25k
1996
1997 • Maximum: 8k
1998
1999 • Flags: delayed
2000
2001 Bytes of auxiliary workspace per thread. This workspace is used for
2002 certain temporary data structures during the operation of a worker
2003 thread. One use is for the IO-vectors used during delivery. Setting
2004 this parameter too low may increase the number of writev() syscalls,
2005 setting it too high just wastes space. ~0.1k + UIO_MAXIOV *
2006 sizeof(struct iovec) (typically = ~16k for 64bit) is considered the
2007 maximum sensible value under any known circumstances (excluding exotic
2008 vmod use).
2009
EXIT CODES
Varnish and bundled tools will, in most cases, exit with one of the
following codes:
2013
2014 • 0 OK
2015
2016 • 1 Some error which could be system-dependent and/or transient
2017
2018 • 2 Serious configuration / parameter error - retrying with the same
2019 configuration / parameters is most likely useless
2020
2021 The varnishd master process may also OR its exit code
2022
2023 • with 0x20 when the varnishd child process died,
2024
2025 • with 0x40 when the varnishd child process was terminated by a signal
2026 and
2027
2028 • with 0x80 when a core was dumped.
2029
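As a minimal sketch, a start script could decode these bits from the
shell (assuming a POSIX shell with arithmetic expansion; the invocation
is illustrative):

    varnishd -F -f /etc/varnish/default.vcl
    status=$?
    base=$(( status & 0x1f ))    # 0, 1 or 2 as listed above
    [ $(( status & 0x20 )) -ne 0 ] && echo "varnishd child process died"
    [ $(( status & 0x40 )) -ne 0 ] && echo "child terminated by a signal"
    [ $(( status & 0x80 )) -ne 0 ] && echo "a core was dumped"
    echo "base exit code: $base"
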
SEE ALSO
• varnishlog(1)
2032
2033 • varnishhist(1)
2034
2035 • varnishncsa(1)
2036
2037 • varnishstat(1)
2038
2039 • varnishtop(1)
2040
2041 • varnish-cli(7)
2042
2043 • vcl(7)
2044
HISTORY
The varnishd daemon was developed by Poul-Henning Kamp in cooperation
2047 with Verdens Gang AS and Varnish Software.
2048
2049 This manual page was written by Dag-Erling Smørgrav with updates by
2050 Stig Sandbeck Mathisen <ssm@debian.org>, Nils Goroll and others.
2051
COPYRIGHT
This document is licensed under the same licence as Varnish itself. See
2054 LICENCE for details.
2055
2056 • Copyright (c) 2007-2015 Varnish Software AS
2057
2058
2059
2060
2061 VARNISHD(1)